[Binary tar archive — content not representable as text. Recoverable member listing:]
var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz  (gzip-compressed kubelet log; payload omitted)
.=P+D!"!.sEQ!bp.IcAaS>`x1hU.Bu}IeO|:u.!Ag?.U-=7:G]%Q=ٳhgu׀9{Jjz_9Տ>{P[Ԓ}ChJ;:+~G- uVuNz?⼷-#x4|~&'cO+)|*0vc}k] l])NVpuqG߮G\61WT+[t%wB *.'+R?)е'Mh閁07.:l<~ta<}|1 :`&1*^֑h!9+gP.k+!E4`1$tv Ye[ BL$Aɳ ~ 0ạ:` huR'T  !l*p:8 TLZ`lKy@ڀiDڻF@zy^E=Uqhqw|Ϲc}H8V7^}! fuÑJ̃89LK[]z=KkS,SNX:vG$CFKe{J!>턩FZ6fE޲؁؞x;lmI|QHdl2!; ;otieC(C19/ FH"%B|a#iʰ*9KDƁ bQkmx6&^WI^$h/l9{t= U.*eWH@]ulZ mˮ헨նM\>lcytp˨.).E%Y`e0l3h`_kA*В"h]PO`) _|EƷӚPCYED٬38TU'`ȣcȁ6'(|lo4O pl#RAiOzK1LpQ r!,kVRsJEEԙR~Vi iC)I#dX6/N!h]g8gV%y$Q$QP64Xkcb/W @ٜ;v+ @%GtjXLkjZw)/inzq~UcߣUii'7yN~[Y痚A !U}}`jvy3ߝsl9:ՉQ;fji*蒌G#(P!i!#s#U "hsh4noNLquE%K9|WVք]6Cq+pIG'`K8TCq|/)IQRΈH5* 9)٘5[5 z|A'ѷt ]~e KĄXEG*%lm-9N.   K H9Dn:s.$JIDY6x%B#,eHK*"M_L?*t%UN$]4*_!41rAJG} uOŁ(NDV2ddv&j!uX#U m`[Q!xp pVl`ծ{;cV36䂦ݢ[v혤/ 4yrRɃح:jvt$voΎ55"RVH;v)j*[vE3' NhT ۠I\T@j?)Wkj/z#H8_.Rr@&Ed.[;.LʺJfJ6 DY^ϼReRh Bzkr:^I܋E[Ĭ^z77[:hW0x6f J"!2 vczȨkȞHQSێ5Qq@n3v JCBkatH-'%qLGPi-"#9"gAz!YD\UV^avWXwȊAX0gƬEXx5Κ-gO^}]ZAOe)\ YɲR,(?2:ԠD1$si@K,P{cg`(B-JWw(eiV* 0rk 4;@Q-?i05V%(HXkRIitȎr&F$Atb`pPmIU6p;Xeiڀbg/C6# {Q]섒EDxm&=_uf"rjۧCRzb$!d R2 k䄱źȱ~b <がfRt@ctG N'Bg!]HUTDf,d#%h'&g]8m{]tԁT7Ա@Cak^T5 MޠRYJfO@Xm(0Ȍ^m{\1Qa@7zmV)RgHy2˘a_b|M >&` m`BҘ6[s|Jie4$ϳYZy{7KڅJ7vS>hEكm! 
Qre 0QLDdwW{ЛdfLM4Siu.Iȡ>Oc?{F`cq7>?$X5q/-RYMf_*8~w9qɯX(PQ'kA#r` %JY'5([VP9$0[m7c٣Hdו/ ,g|7;4-HvQg_-kW!WCߦVo1)*@O ]6fIF:(RJrI&%j1e8a(ieMP(]$ҪT T?d ;[nk~cjtC} +wv|-ꮫm&O\.{= s2`<0<[puvd` xcPl[1côE^-N5O-Y&5mO/ldm[踊A ~5Mf6hSrk!Am"Q`G#H*i>68-ߒ6?;:=ym[n̔끗0[;V0'.·]M8cIp3[@l\x%E4Y֮lAT'NǤ8c՘)yZS<[vRc]т kq6\6<OŢÎ(p 6py܋` -ɨD u 脗H":>ʐZP")#/g*͟LVT󽨼!m:y]uܢ ])ԢSʠ#UyR~*.I+A}r&y)DA!N7ZpIR&I$elcJm\-nKtR2Ѣ uXJߑec(妤M#|qCN_RdU:Kkl9Ou-tJ%/# c(?Kǡ>^8ͩelA hU rQi tʹAC(U-hgs}-erRRoc1֞SЩ_2ڴFc#4e9r]otJZs|^\,eCMwtPJ]}]TmמݙMͺנeV&$-r>At%!Q CT̺d2 )#+% :$f ’tN8:`m (Zwmg8\_ʶ[| }vзSï<#yJݾ>שtRϑW1Бb]t!?{FO ΢} r} YgYFQKzm%fW]^ZVc; (ȆN[8pg1q/#J&v ԭd%גir UqMɧdt )aP\>ǕJ\l2#@NSUƝْ9N}[`LJ.,F2y1]눰{n~O76Yrn݅qB6>wF_>owPsqsi x{vSV|jsZC@M73a4}7q7q7q7ˈZ!KAY&S ĤA D"d鉬*3l}B|ÒsNܔM "y |L{|wl6=(GONiged,-EUtz鍷-.1j&D#Egqߵ3 Nt۞ߜߘZT.V?2乷)Rݷy~GƶeF'iիv&{WK+ ի\NﮊTۺ]r5̈ rW$0bUW\**uHiXޡx vӈVɩNH 5`o6`0EZMbnh$%Y)/ۂ삼tزE\З⥅uL=,RZ{w%. T5\*Zv)H{IYρߣR CBr}_*>m']}ܪ6?:Xf\9{zVJg"wQc !,@83FmY)9c- ,rĢQKu'TVi9\А0I3e4AHK(/JX- "btJA,9`(D ab'*yK93̓ɊAd#$xϼU̢+5s ߂/49A3x4rJܳu(hhFUYzJ!=м%$mK@ɢ%MuC u9Fdle`JgN]x)|7qr# L>AYQ*"`4 ިFiϵlDC'_ !cB_7L\,#S0L h/)qh?qZl%=ϵ8v3Ekf8i|N,K.2r[=>*Ċ͔-e%BO -4N* ƙ;9RwNg-([T{n:ՑS̊KQrFsșe 6VmzNs-}J ߌ_')T nox>|TR퉙x| -&3_>`3}?g{JO(NEލf`B %!*a*e! 9eI%y gW"y2Cۓv  `kc=i"4_ޥjH>SۛXx6n=2|sEUy5OCӁ>P1ު,nz9K9XȲgZmD8,^oLziшͭe%*;/8N.a[ 7-îNlXQ(zx7v픭tPRk曰Y@XBVmkb ?y{srKGཧ[em`kwD<0I%~ۿ;bAc4 XiEhGk$jћga s>s+)[kbvE*5,wcQ{a2o;cJMF)xv~dWN]Y!j}튾[PO#߯ow>^.В':tpNTA6H͍V67 6k#rD={i1@4[-tBawP ,.zٻ/U)xePU*XWS8'dzR`m x ۮfz6]_U) h9w'Q 7D`1' QkEH2{Ȭ3:AU  I,s򐜎ȇC( 'ϭ-Pj5قW7`Çq:1b/Ĺ&DSLH^31Kկy޾&s$ %x(4*kY΍0{ g1GwQF (bfYLˈI,]2uk(rɵdCUiS)0$HJq$":}&P(ٕqg5qdSXx ~c޿nihv:$ń@̍M&snnw~xOlFon}]rh: E 93>;֧22df:b~v)"3w.Nny_4F #C!ݜO:OLcO0q]¨W$a\gC؝emx}|{Z0 ̭֘KjMYvӚĵ^Jk"ޚHoM[SƜbv8g?7mk]ڠ+j\ѡϾJ=y晇>8{U ^X%2jKu>iS|DSKTu9b>;"z0ņ!Ťi q}`+lIyPl' >`uG+cHр2dU~rs:[븆 P?{WF 1O3@GDfYb]öSLj['HJKr\lIRVdVDqk FRh9KD`y@r'Ok6)+tH1J=3F B "IUJnb[k !EĖ `6呿{?^q-dA+l2`2h "cfᵮ2ǐ|/ZfB]/YmRd3/+kQT bPȱeȲv`Ȟ!6'Y,vIRyl:<p{̣ +[Xa)#4%F \[ރetU,x_?$1z? OFߚ5#HcњTĒpH,'. 
װvN:Wv1 u+HiCZQ$C6J^zcL!S1N-"RoCnD>罇n.l9U VwhZ4K3Ug׃/?VyZ|lQ.dDJIGcr  hXHzD^;WJB(.y`KXmkW8,Վ\1IQRDe J>'%.9AҼo݉`G:(J&Bܯ\d@,0eQ 8(|6KdY6>8H݅< $iXr.l$JmVI=i B'}>uU s:DI *MD/ }q*'_0Ihr1_DҮxRH~:UʂʐU*!q/c'$ckS>tCR[A;H@'PzA 1 zq؄L 6=_[8Ie:txu/`r1)!i]"Ye+GYTפw^$LݩW*Y#d]-M- }5"mC<o?>}\?~أ#G 'gTݹ4sM*$?`S+fv:-nK|Wtud;0dݼldpMP*yި{7p3fJPg1&5d%HePtoP;GVs{o7[Gv=G_TXyS)XUtLcZ xi-:gi3Sm70KKԍ2`ЮP t݌"w^LXzd>aUS?7ק>µ'Bvוl۪ βYl(HSC>'TEl/;c2/"bpzJb4y&g{2R9/ Td)d2:R,L ud6D̈,I$!'*T-"! Eq1c!??U1C`J&i`jW|=iO;onN%K3>ye*"KUKG][MbCFpwY+C.>1R%kl9>1;7x=lEG`* j}AGA9%MNGd6Đ-)DuTTzr2iSqA%d^J_XIu;#g((3x.l}$B7Յչ;Tz݁ӻi\_ ߆7;؝4!])OJVUic` Y-l rETCgJEDK]dO-gmz=a.$>B 2 &Sٲ0%?ʊak!;C*Dd *[ﻐg^z{G2HZuFcNTRd##qV{+!0.dlÁ@/ivBq/~yqVEΏڨ="p#8XT?D &Z8D &םOu(>D!L&j#+P$œ:uꊩTWN3V :o?Fv? m^OW4f~=hA>0x2o ƝgUOƑY$.%+=m@<2]v6bsC9o_vV26m0"EљFMA5DߓȂȂ²eF[kbH:Hc(-@-R)(e6#Ɣ; шRiqVp8(Q(Eu9O)UdyeAS:fʜy;-Re|G G*²~iZ%f4m'[z6ݶ^.4,?AXOӅ}=8NJo0. S5t S8"#il/#"rIZ/q֗QiN&_`j}'>EǏ|Yy?|VXR6hQ֖H AqMM-7ʕϕ}/>#3"-* JJI9aCʣIF==^] )]$| [bV'c@Se %}ɲBQLrFyPSMJDA:62;#gIS1U?g낈n'r^B uԶO߳[s*&JҸ \`S1 I2*d]|BF"m+' DLg uH2JZ0YJW][w ǓxV67ln@hvEm-L0%co\|I*u#Ց&}`h@*Kb9IX:b&H$MYzZ9Ģn-HRKY1t;߬БyQ`5vtæBcVfw ؅"6OK ]GD &R1c׻ۇMw zvKfxf; tݶF~yc=F^s;?ToxO;uUx.1vyqMޕYowϚ/oݷaMv9_]QZ˭͏d,fK\{:Ё 24$ 5nV1R(?e~ghV(PJɮqERFB lJV 6:5OML lfO&%jGfKjP+^"A"*]GvFnuy} G_7@fYgOD5:jѽr~>n|h6UÝiBbA,f! 2=y 4ނm)#Zo ^x;z{'=Hi WoajPPƤkmoɾ1+-Ioҍu;~j/D"4NQj@8D"1;{;}Raz:9x};=wįk O6HmJ{Hu$HEX.=9 XWD xgSiK 8]J~Hu;Wzo l׃'+6N tn=:ҒB#4bdU_xX RL:T 0Cc2*0dO*&yC>O>մkm RFb $H,h )9PZ $[ۋ5tHCQ}QgvV:mjC~ro}6H9! 
f͕:"eL<@iUrѝKJ[W"eLR/R)ovM)/XEߚ0\w`4cP[)7YBi  ) k)Jo4suGLN|6`Qch!aNɬK2t)D4#KO9}{Ȏ\O 2m n6X {|zeGgSldY#ٲܲd<,n֩:U,V,q`an|pCt -2ySYbzGLXs>RLL01(x!^)!J>v1HWi|uS=VvSlqM_Wl屯5W9iDC.6eYkXF YSEYJ=kQ!#9R䘢u}6!V0fY *i@Oˊw1k8cJ#Q$JsE.@I%5$qJP۲ PVVvHi=)x o"cY5H0 ?y!(ire3qK=O)y 2P^76r9:Y08w\s,3Lmɏk (3J\߬\K B9wYf-{iO$)=)@!.%.76w IHHѲ=/639vze=+~H!,Uڶud>ޭOK<ғ|U]_oo,OSmpmtHEg1Pz" ڲ 1E' ̑):фd ECԥGA JBk 96脬 GXMځ!)G=$\<E{i-a4N xl"iR6Y)& b)9+4C 1&[(?j^4$!dY:‰"ALtMf6ڬFK΄U }RNצt  BJŘPЄ:w>xbVяrƼ9DHs[Fq9#wv?A%@i J4tx9.AIǿ̮uFN?N{X<8QuXliq5ݵ[gW70_I(`@i|~P/5~5yAb~.\qrD?@v[?gQuS,ї4\o+&so.|-4q!=2xV6xJ0B!9a?_~kz5>Dn!H h0 7_\/Nq / lx-C]~^ٛ37qg7R+MʏΦ32sWgidFtGsKޥR,u`ڤ%Ǚp=a?HY+\}^N_]$V/Eo\k*#yժ^\<*ųѴGuޭn>ϕ[Cy2-U7$peq|9Ǵ})mwU% l;PCUF2sR.ѴhLOfٹlE.ȵVG{Q_p# qt%@#RyIHjPF3O'%Dq alB3{W3׎enpqĸh1) yF#olh>j'bn3Adɸ+hȴ<ۺVݷߗ.Ћb/[h*e%`ir ٸ',sыH-xGj]!R=}n[;ʕV6dMW.5wOZ<ٓ`)vʎĎ8P<0% W%VSWF_}.yB(dwgn'{=emxCfPA r H*EVBl< qBYfml)lbFtY^.%/l9A+Ø!O*wLV2 ~W%]#rR^O[,"\{Jt_١}S ^d_9mmbAtԙTn)]V4V!dRKvDRYLJcyC%KUWܦNՇ7ΌNrLOgHY:e.Gpi2X2Azřcف e9q˔.[+&iM#, Yd&Q'`n=q8m8;vdyDPSIy9$Aa TN\K!EŽ6@$9".2ᴎ}B{NQh"M#sI\+uIaD<'FY9 iﰕjFlT'bK6B+NNų6xR2=9nB;@JYp"eTя8н;2]tP :|iX/G!%wd$;F[R0xh/"Bc+%4[C}2zS.h"8c>01 8Q$6NJg&HiO$<£?x,IMBEAd^ m{k:iiZAqHEBB͍;Rn0+2v6K59FGc9?ݎ!B]t`^,6(,;} .Ӭ/Εz!3ōtr[]+H"a9e.qcVFGfơԚbţ^uiL,W];M,\L'&$%6d&Y2I>$?4˙P :bniGcE3 uf=reNhM 6^zKp% ڼ򱗳kVRAσ81r1ethdiG  Zʪ~/HH0uߑ 'D߁ Y2ψZ,}#{mҸd8M[\seiiImkr$Ć4> Ճy>y bԵ\+{YoN eq<˝)XH>0uH4ocA[oȾ`7ɪ̽H2ʡSV s请찳tOiC/B"-|x ,^.׶ew~j}߾6RζUSCyG Yk4Ww^khr5H+a} 7Ny2@՞8iLZ׻s-t}82ġCNœ ft}yYbw\kaӛK6>ٗ8}n1Edۓn7 ׇч PxJ!B̘2p#)e b*P$g q"Hkۭsb~n8{@T߯+}  fv8`J^lqt;__^vk7:ed^(T_ӻ̊.9IpTu,X֨K/VBI'%k/Dv]} uw$deM/;l,|Zt)`>".MmT cV\.[]g]WGV+ъ,?4L[DzWՇ)A=ˤMҁ?c=Y>t\twͅӲ &LIQdCC0rX !Κ2p,U0s얘2yT7سj[4UJGtY,5Odz]S폓j\ aPG2`oTo;Re(ȥT(U;JՎRT(U+׎RTY(U;JՎRT(U;JՎRTjGQvjGQvjGQvjn.&Rv/mB@[&Զ mwm#Ik~? 
wlps,؛ GcDINYwjReE/˔%;m LRfW6M(mJۄ6FZ&Z& mBiP&{iP&)EJۄ6+mJۄ˜1Kۄ60EiP& mBrmBiP& mBiP& E mBiP& mBiP& %~joֶPK#C7 0eծZCQK'U53_{YMU ^p$~B;ErGeP>ɺODRKHUGiN 2Yc'6)5%N5Hbm\J8Ǜs $I[NN$% 5Ƅļi O]Φ-Jcʉܱ4GZm{Ȯ _ }=`{oB cC+>.01r!ՠ_u*)juE$6" jqAXF)q2JTK&!cdV;9"ycr6Ap #{)B[ρ|c夠6 /ۮHڲ7Udmm}뎝ukS`!Z 2Hr6 c !j(0!Jiu )KeƣKq.Q\8VgPx>LG-w&"]sl:9x)z|< iUٹg_P Q*EsMOj?؁3qWJКT;ocJFɋV1G9#Qh/g]3 1';ؼ׈I{듟_כKqeqׄ Si7 z|,= 8|C@Yvܛ̈_:zr*'&+hvPyps8tr}mym<8vgjZ F6LsW?onbvə_z7 /jznNw7xꇟV-DUBWᵋiXn^Q.~'3s!gK֟_?.$)8P_L-sjE H]T#uٓEj5DD#I lT d1*LNZ,jJ%*ÜMxL4)iL8/KU.zj5s>~O"88)(D%"BDv APiasWQ)SI¼0D| o4gEF%O>] ͡Plg5-cBMK|F3wXOW'D=^RzSIubjI0'2dsxdUzp{Wh))I}f:xwCi GcBM܃ ?omTDɨh%y\(gU{(J,̂Ro܋@P&xWc._PGh/Vo"7rVYwKE9</ tAAOw-UbFPsEmRs= v+-PJrJg/CRg?ƏO|>' So߼>&S"ඵ J1I5>aEaX9M mъ G֖(8Qi`{_qGi\1|wX,enw'T?}[ +6~]nh!yFhZ>$Y<N[ '3>f\x-{ 3r:[-A>!L$ asRA GX "T4y>6nx9Zu݌r¶=>3$b`ZZ6,˄ N$c h@q9@ 9(&P~l )ؠpC36Oi*e#$;.Ӹi%`9X)9J9?nlMO5mH\ĹaAHI|p{5x&mLH2.rj; `Jd$bL0BR;< 1)KGd$)M/kF∖q2\bkan/0_$JOi.|e4v| =~of˾s%?C:Lf-1Mop^K׫s&݇0}5~r_V(l18$;"FyDqJ {Kp^h9ԍFp{1nsWгQ@ّB;R ,.*m'f2,'T.">B]U4W4y=?_K(WM|s&d4I\ I9_hSiw3QsǞV5l.u0O739 )upM=iy6 eq?;~Jk>B5q5+*/y}|n%otM)wR4BTLI* bS(V$j Q2 ւn1GB6%)ĸH=shbtyY)BZ%cr>rZxDu(O 0x;3z=l" '19xY S-ΌiˍT0Ga@KU'" T@`4<0G1*TLh0S][oH+q/}}`A{eچmKx߷dY%Yjt܃M.*lͧ],(. 
J*lPq@6ں ߷ a}t/7j{ )+BD4:%5"1% Lɡ V=VGW&g;ZHC괇sv{ l ,{}UW*j6_-Tnsr,Б|]KuI.]KuI.e|]KuI.%$_|]KuJuI.%$_OOV$_|]KuI.;:%$_N)%1$_|]KuI.(ф|]KuI.%$_tԡ# xtrPh㺺"J12 sHW[]E‰uXTAImj53oWQ^r8ޅkG[?Gᴯh&&VQRƌƁZ1b"* NVF ^,$ S.(k]23F0Jѷ0`\ҙ@'r8a4z~uH]u}k|=|%G߯5);Hwff@ck '#Qb:5A ɎY??82Cۊm?5ucwk#rQ+}=#<`]}[0MLY+In|Vqr_/H' QrJP_07XU=U+F 4 "[m Ҋb'>a'/GnƮx2k@*!mqi&f5tȕBcdq/F?}x77'z!D!e!x (zg՘IDk1hFs+%Ů233A~^ʃϻ@ꋋP3P{@(lP:" a4`%a\ c eޭT&fQ;7 qfo$*Ű[)rRsEE%['jǡdf['{f.Y:{xV?6#zZX}taM.yCWoٝrdHMfkΟfz`Ry{o@0a3 )bq\nQ` NFY4uD7D(֢(b'ēGm7 ,3AH喌!Mk#JyYme% o" uG һvzړf@ua>'PɷD1\4R"$11| ᄔ]F80jH^,K !D%@M* %^F0ґE- 92Kllv2KPv1_U iU#pmh <A0LRAd#nQ}l~.cǞV`(QHLd$&F:ł)\6(l"LR#1s)4lN+V8혜9LXp#vjL3͟W6^x{b-ۍ&r[J3!pO,3UȺ| W [iD-U -G_Z U$`faLeҖ5)EGN#IRh^(5Ghgy iT]2pDBJAY&D4rfb@`ؙr>!Yǀ~@yRƼWQ=[7I)TJK CHMpz*i* -nD@rĘG8A5TyǠte[?H~wQNPδQƦqA~`,cօadz NXYDp|-v >`b-1.R=zjZdA 9 \Yj`lDd 06,l+d_`G'n{tRͣn~ Hq(lJ&dy8SK5{d`X!N}*룫B#j#9R-5Z8m=3{J Rs013Fe[ҏ43C_n_t#aR/1",D 2țhA #T#ۀJ7|7M8_>7W'!0n6]_9oɐv}+/ULfI)|3HE*L<Dc+Q"!%K.h6 2\Ŗ?0.w\I#6&1C?Ep=3~w/× .3 c|=\ +3Va3œcwkbYһjj'mDHWv;00 ̤msȸݠӴ<+RnHRn^$:Z>GJT# )绿7՝=%7LVI:u4zaxM ாzXl.<gQۗvwLQ)vjlR7qÃ{`sPiIY'"rE_uh|PqP(jZvǼ'|/]F%9Hp!AÄwNj(4V]W]kڠ3fM`k/9vVR1UN)bxЄ^,C"v蓴,T7|O|X^- C!f68R$#Z{=NU]Yo[+Bf0̈́;)=> 0O~jeGRS?ft];bІx%Z'ך65SɂhqysIzg-4sls2FL~/!x ~?0ݘ\(wt 1"Cqݟq/wÏ_,7Bc(u |Qg_rT#$}A>yU؟{_Kͭ˾iӿ PN*?m&|Ny}͕OʦR곏rŀת0 MӛԛU~dYAy[6Bw)*ݭS-lqDKoǩ+z蓮+4)fS)Rd4t^}8b58R<, 5x2;5SzWUWMgڰopFm Wtzixa8߬9Y%fc_nq8RI͊8E$_qDG<,>SŽqʕ) ph"Mkg >*lPe)ǒ3Li]Ue =/Z'K5_2P\lQ;*Nqlh*F"(u&DJQpLU"si=c!k/BR|n[Ms7Lu nz91 <12E1)äʥsY;DB]mG -e8i<׆,9PI!CJIPgEe=&ΎzկWfU AvQ 1)Ź4:q-9 B3o@H 9.RMj2z;}o k.i|8E L6"j.dΈ ƅ  jUZaoqR{ޛxa&0b< TksFy ܸq&#TTH "k :6͉=7WklM{ގwqA %MZs_,9/)QL'@(FoIK8D ^5m"}&REEEMsSCG@ 'kO 1TR)G[d>qM=pwl\Sv!"Qq?LsПvMHd$nM@DI;%$d%pΨlYzxsR68rkaT!$$WP%bac$Ĭ->Kc+wݞiۜ0RNXs')#zCPw dn@I@+omvF@,1xA|.TtG養(h*N102&.7]~EnǷ߅YZYZrBw;OyhV1s3!_R>|kج@Nj 29AhFߒU vLӧ_d&3ς!i d#FxG!c>@N$y4z`l[mh(9A5]㪳SKsp}aƷp#{X}}DNI7*Qf,JBuDGqFڲq:+ݔNk] O-~IXlro6Yb'z|7rJ f.`W]LE"Q^bH J JS.c|һᇛqqv#hӐ@]WXF9&@'yNhKa^P'#9ge{($r`2i rw 
`(YՈ{.~.E&kDdRi`Y{o/7Pkq-[s LZH|ʷv؁ ?^6s] HDizn]F/0]x@@KGqۀRWA <$ܘLd̖x& i Pu UQgfW)cٛsV# rJGͼ/) NlXd 1w4/"ăU<8ϮyNFRI!Pl["zm}_O+BN]}S{}:KjU58XF$YX\A*8@3JhaG~]WWûq^wf~b%3nHf\StA冸E',-E^?&?x9p$p 1x2qZE%Zxўyg93YD")lP:&&D 4%lTuc hd)AӤR*sQLIG5 UwVgqb_C8ݬÂӪ[ns~~F¤~ht=C[jzMuE,yft=»w}w?\{N/nc$F&51t gG1om.K7vVZI늵v+v4TY:n'=?4QqB]^o:wiF2uKe)p/z|jtW‰Sv-W?vWF_Z~R-SЅ [Ok*Wc8MH2 +M +\ZDLԠeUTlZ|vvj\Ә0`ɁI`RɊr0Tr0w:b;nUrBGfY:"%p#l ~ x ><d4=ؙx87a4'@\>)*|R xx@F=6QX>୍B1ȐAG<31q?P:GDLUTs 6Ny53> y*~QuخhcXjzA޻׽ɻkyp}riO؝aCYD^܋*1 &/jɌ7,FW Չ 8Y *ɏhykilDtVb&m1ਕi~ªPVCY[Qi@ _ᅁEgې։ܲ,@x@}5j%SIʌ HI A$d m5h NYs) D`r%87Lp1Ka ! 9'g*eƆA- nͣ594"V~2D!m>QVѩБZ_# ,;VbwDiRj ;js~r+|D&D@yT\BPr1Fv2PKd{3RhAݫYB:.~#]¹KZe!BthF5)eIxeL93% -eZ kK,?q`|}'=p]?&/`:yt\ö[dz5#.Nw_'S`D#q[cĠ NRǔ <1oLNC4hHAV]Ѱz4qB\[|74 Ŏvtb ō\cUY3CE7ƇWtï=7N-XbOEP40՞FUh`qL/CBs@Sd2ESwTRc93JmH*+x`!IzcP@2UP7Nk]Eհ/Cˤ@/A]̓G+tVhCLpJ\ӟ9S=NC]TՉVy;qKtp&9"K9yMG-;sN֩+,M)`r{Z5@̂q4=)41[ϣKrΤ1%]#I"U&`=ds`AʂiR!8W=hJ&%5lKrX3SS 52v&8K9v;ac,t=(k7 3c6ER?Ct#9^Yk i#ձEl?H:jc` Ȅ^veb! I A@ M4%#]2=a]29]b."uyxVcAδ(P{y{q("A4Gsx`q6;b򖙸0,)*`kS ؋ cMbJefkLRQIDv]Lu隸W`<DL?vEDG+gAm֬m!Y*f"mJ2 P t]D7ltҎN^(:p[u' |>0dFvJW=*zRR6$P7Ӹ>vL*^2 񸳫vHzGY6u`7ڐYS FQ[IRl(?9`A/1aG] $&0I*O-:DTV*c Rd"o *딥rWXHq3q̒ZU/pe*UlxIi[|&.o7fW<2 ڎ*Ǘ˽3^tjXVX>$b"W hb)RQLO/+/4qx3=GTP(%ҴSTt0:Edφ5jG=]({= Ŋ{Gy^2V:nQY%f{(&v7T!T&+RvD/M4mcHbA2be;Ѽi 0]S8W ٘<ސ5]"[H㔔*:F%c@:h')F%J~L*ur+͵\Fr4Aj8OƹMk+;aiW6oOҹϧ^nŢ˧OP^Xs0DF._,>_^:斗6szƽE2 |7hێwokχgtjaiͨk2Om/\$3zP @k/F?,g4zX٦姦C v5A/+?=E8[Ppzua@A*IM ,{5tZ]t?tdz>[F E]yvs},yĎ}g]Գ'|3O,xp5`&]Η'Fo'k]K$ "f >9q  XuF ;F P$'Q& :zV9Aŗwl C}J蘅WƆXcRADӑ_L?9 RlW}~qE$U1hk(LILf縷#I \;|Xԃk2 >ԇTA&(UWꡎ}RיS:}Rd?A NPZJeNX`dt2pWUʕ%WNk"8!ru8<";\U)WtS+N;>O\Ii p=\H OdJእpJ"HnfWymcS?JarS%QI[7t7jʱ(Y_U/'c24sJzXVyϸ￸!~Fk oPS*{L?\]_o;Y,+5t駿O߸d"lEjQDkWIj/ue5v1|{iY{}4KŠɸ"D ySipΗ%Z?N|ʪd T5A8jaL*aWRV+tPwnlY_ [>h3w4Ba?kO?~`2ߚ)Y)Q Ȇ gCrdIӟ?u<9X8O{:*1U)]xz⡆Ntઊ qJkXJہ+~ sj10_??|?^,sSQrk&N>NfNu+~ #j E56:h|'e_?:ڎ^9y>ep$0XbQ+R"HZM';2%fr$$(a[hPw-\8bƒ{6i԰ !8ͽ$CJې,:6yT$fТ@옄bbGs/- Nhd?ճ IGgV.sR~|־|bxXH'HrBgOfHoH[x 
SapJh(fsS`~?}·|mg-٢ȣ=촱ekSWlLھcT4ZE*M0%@)JHAX`cW jXGtVag˜B, \`o-ȎFt&ӵ=> ۅTa[KR7滯=|L~v-]Ɔ]Vw p+gnO1٭&b7 &'kH\es3֍ܝj>*ͨ]p16w`;ѝc=B.(]=Z dpY!S %-l)SL=(yS'{M~ͺ6K"i>$0CEOh05T)*g40mh)\e_FLD//;>0uNtdp*ӧKMb^* ? i2:'P,=:B+^`lsQdH1O+4Ⱦ$}X:#PO2ڭҢcώĎ ;ڥqoM dϒԗun."Q>k2Q䕩M5" cpRf.L9k>M$K P5g^9V&;}YM* f=;~,m[ex[=;ճunkBꚪ֝J]K+A{]SRʾ 5)Ht>uxհ'BNK>(?^W_᳛~`2mu77]@ :Zֳ}6S `'wOh09*mrNC>Dd)9SK_w h41ѻ<[tt3--x>~ө];qY!1gP: DRA3A{oEN@9n7NI'4Eq1bVPJ1S`1ZKJac]I_KhuշLoAhb'b]Ij 1)iC䶭Q0uNV\5wYᄕS.=q6wdk>áv^lB.9h$[yQgΫZQ:(ݦhkJJ%Qw ډ(V4 ɖBAP Tؙ8O#, Xؙf섅BXX^ni~zyzpvI=o ǟWW2xee .j_=l?H:jc` Ȅ^veb! I A@ M4%#]2=a]29]b."uyxVcAδ(P{y{Au)"A4GsxӬBZ͎e&. 9vEic 3C #fXRi5G(~TTl;iiw3(Pe7eWnL,[ڠLo73A5,Up'pͥ7 MO4 pCji9ٚiD;-yLRCݰ7?8x2KB5טLVi$,"stԊbҀQ)jhC$7U-=ߺw g)i¸tϝQYTz(,xgB6l͸+W_QԨsBΝ(`wbE COp|rWɷOpF O|V,H^U2 SsU"Μ+P#eȎg@?q f)pIEc!F-TKC.Ai,V$8[}e$aɽ^ZX)!2ng(KCˠ5r(nܫhx:Q,(犠$6 f*&b6h< .HiE =%;bˀ"=6ڧIa4%9I#QyA8,BtH)>fJLs4D#ɐeStYE0[B5|\ݔ1f0.b T~m d5IBQI2!P ̵#ˇzoZF7p nVZ3~3\Mf_HN̋J* *Wp%wpܶGu.t7Gpt~^b-[ȐMFb݇K̈́B>E\B@Ozg|f\r/.MmF,S>RX!ND|$>0$lk͸מכ_oi),7-㯗%!t1tHڵ޸J^nz܀Y4)JD;8NRx oCnd]@;[Z)N jwi-#hъaN츨-s}[-N{\ f/5Ϻ-ruzd╅b˼,5/?4L[XPՆ)tQ2mK@ZX#lڶL7߂TQf뒖[ n6#7oZr /H**i^Kwun + qcą/LBc>V_q1'+Nˮ{|tm#bmfIj:{/rܠf~ӓx ocJ׋y'ZvҊ.DNzTܹ ͗=\Po^Nx[ r[-gi*!wR bIW`z1sdrKLNuh7:;2]y]kBɹ>O?R=zjp~-7yRgɷ7\~|s I,O bd/|iF,'nL4t5=`/NO#?Lȵy_E.rG%_>f$y7WA :6YV!|:;<M&3}P0 1՚mfìk摎&/o$q:2Anhu ~Ts %Ϛ#?#'X<|}gv3Ε`~lc[[ǖ>kkpɀqp7,IOF'ӧ{Z ˉbK-X)~#ZRKU5`RU $J"!ύ;Nn k=٤$~6UW-$*WLXIe1j>[,=h_-vl€祲DL FZpXFV:S&F ܠHVv'츍PGP&BEg}NHTv([ƷsHZ/w(JHarzH+׳YZ9ҾBw_bboug[CSC ǫG«}n|ͷb{Lc%U7dUAL'WO~ 3ct> `Fna1&Ǩ},IhyR\҉mM)xbĮER9߷~k)M[c}$NIHSVEraj%x.{g-R2Ng{mExdjď%jC+/[](vlǷs$F"o5O/dz= 龤I%&d∢;+o^sZ^zmFWCA(C:P!@Ȁ{΢ɤ #庮6 1 R RdY)o6K"Nf- `1d?hDnF~`v^xoW_ ez hakHGKv\ +8ho/h%+nLddBmH+%׳A}B 3U2XjBANlp"fgs zw,/[E/6;hGvv*wgy$Ӿgy3ç*oyls$ :iư[+Q9UT_ssh;9f*y鑊`MLD ,%lBuc XJ0EUiRr)i92DHJK>)p"1ZA.(*Kkqgk9u{grlnݲJumIxp )l-,1{n[Ƴzϭtp5"w6YƮpȠ?!{,bYY]W8k[wSmwv_Ƽ++wCq86^$q& ?z]=M\s`IӤ<`ர}` 8!wE#W'㮊N]i:vwU\'+@7 VX_ R.h-Ti [ˋ۵B[Vp8[P0ɼ;5h,>#n_~29؛O}4 l"֧⦋4IiyuӯM#` 
+*s**R5+7thlEP/~fی7`A;6KhDz`mo&TG.))u*:e@I\0G೶cf6o'?oyhQ@,dr>XbgnԮntcLgҌ o~k!=KIdV[}+oe쾕ݷVv[}_+ErMp2"T+Z8IɅW#du%#IJF^+y%#d䕌Dj5`%#rW2JF^+y%#d䕌W2YYjDn*y+y%#d䕌W2JF^+y%#d䕌,69šl}HAeА&]t%Zz+6|Ϊƺr Qװ[ֽ]u|^N|>۸aݫHO[w.׽1˥ϰnm΁1>rHq3QЫ୍GC6*R<)5DW-Cز-=A]v/Ubd>4Mߧ`N)3۟An }vsjgxy"?vwߜvdO{{Ko,|r?T F~ `[<ێukVmEZAZIewDŽ![hriJe*9lT"\ʨNIxku B7F(uЍI,tpfSΣeKy文>z]I]=#5ytgM"Ribb LP}4q_f"9z"l 핰`j3)À|z==6jɃMI4,6VKlp'd2DKiuAӭ;p#P ; zV+ wщUL5D/;_x/W{dCka|*|XсБh#&fQZ6C#=6%ɜ 9$NP 6YH ;fՊ>u-),JsI֤2D'R)?Fl:g.1C4hW5r~qPlt355>+ҷYon,ö1Ww9 g vnu*@|#fA; mWJh*Ļ@p&We]4;cZǡDb BרkXxrcr8l)oUZ%'i$'Iak㑰F YSEYv!C"\s1EoY"6$ k˫<$d7Zc+/ךwš/qvhfBt J[ EVݗ' [b#r'l^yĂ@Tp% O9yŤ +&wBP u2B~kc2&f\O &u֋R49)*q.erR}{yD7Ã( 7 zgvؘ,YE|? &G;q҈,Э  *o2BV Jk(.8ONe@F(x/H rtPF-lԙ=flǪRfAgq\ .kGW׫^"ح`s}@Yp xg-@3a#&Y$[HW.x#9Y$dȑIHd#(:|2@AuҮ=t{g?lґ1Txc_{zx4'jB7%xII\ɂG೶#2~)'mVC(r3>`0$E581x={7Ÿ)ٛg(_~]K`=nzۮU^uy߁ ydϋB|^U؉ϋ*+$>t\]%9߾8960%D T M3t#]8,'瞲 NN2(sIcЎ^Yɹr.Z[.I gSL=z<+oZ^v4@qf&S?'m$w6?Pn-Ok+ZluEq8]ZGhnq\J %r6/r&ރEyTڅ g/R;PpN6u>5;ߚ"g7'IA`_T8EIm9ݦBĬ\ϸpT z8fgZpgn6KW~l6i,lxo(zcɻTzLt8s7~?Gi0֎ض5+~0AsO?+xCŒeY$b""je ڔ xo/-MZϿ1?fr0)¸e53f ۚ^WӹuAec+Bbr;P3h}2k1ET f4,wN|mŶ?}q0f@o~w|8^CT"(3 HNJ8əBx"kވz7I.g4[Z'(*/{_rCTn{O=^>@M/1̚/vS(k̠BSfGN$Sd%k).Dv]}Mુl}f -gKYN:Ftkx8+fΎqֿnr"R5"DECH1WF'L2 ?<)́#ɣ YB !p^p_透(YKMs$Rd2iVgqH y"OkRbT.N:AEžēEGE;≔"׊'P1eUB)@ zpjt2ѫw|Q]OV.Ӵ\: 8#YBe8%[-Ji>0Z܊ږ+pl|Ӿ_1[R>xV('i, AFQĠ]" H!>cI?TC^P8:#.^O'TNamLS["N 5mM4eI[*Kr4pUX*[kpV \D8'B9\=`Jemyz 3JTz飧\Wrp Vh*kU>fɾUTWwz6R>Ԏ\vO8P֎(>6X^{/`G{G|-@ Lsn̰W0t6ͶG\w"_lruq_?ik;.Y(ךY +zRr'ADιkRD t35 pFO{ܷCACs%gǢZ%TߕZJ*9,8Bs5c+:9ZdWHIQ\e+4WY'ֲ޳lPAUGPsi<O;sC[ NpܧV;+_RT$1u T2 F۶آ327'P7WNߟmx7IW] tWߑI.+b<+}vWmژ:xg%4 i,a:dɓ.'sYd@!&@aZR7Q{J{ yŕ7~d!PPp./1jlOC-?Gi3_/ Ny Wag!}9F Ѭ k)Z k)Z⤖[KA8 k)Z k)Z : k)Z VKAR5$iw ]o~89}2ٙ9;l]1d쎃e6&z٤{٤ e6F'H.hK6j, cB挄yfJLi[.xQmae&Ph*TG#u` `AJKdFPTX[3gpG+]>.N->σU˛voy^Ӛզ:}u8ߙc_ hy{5lf֟G&erpjrx L+H+RH/v4; p4; ɿlM?HC#J]6Bw2g֤&?ӛ)zuyZ"t_F7kgt0{G0HPHˍr1ք#Q("Q1N䙌c9^:y x2*X =y 2*dHڢA8/V0@60n4*A׭~<oǿ,訂ʵ`2H"=z|I'\&iUN[YA&XS51+(A2fԳHQie7t Ćyad!Tsw:{wV%0^ 
$!2u9Zsj-^Awz;&kxަ g۴H)^30StƯ6 W6XM+F-}os;]8dxӅӬkx`rUi7-lv^fm ۄ݂1Mnm\ _ ]7*V7^Xf5=Dݚխ)H {U}ޏ7G}~ǻ~7ڔAZ|ץ_:cqbg郟/ǣ7!/Y}_qA)$4i!<gVUzZ[%D@: QȤ2H\g)<$kR6 FgCp=bހq|Vߥ}Wވ+9id>E`J TB5ԡ"2yA0aFS4"9DhFVG#Ո!.Rg:EKʧ{\Z̜$zq<gi]E嵣Zh\)ZAAWTe6RM!5fᆅ-rx"3"5iaA?j;hzM/#,Ii2BOԨ>p‰&1:aPWJ9 S@`(;U6F)D;4$ruPHJ@mBRȭN bTh6q.Nnglΐ.ښdyX~g&4n޽լsm~5A쁑MSg+}os",n &׶v4mn7 H]r?d񷾿I4v;b[:jс3WzyK\~Qkey.n<wJVisɇrOot%HxD6*'c? /K}g"(4)A$8D{BZ:qbXPd)}ER'9OڔT]c-|2Gk)xŢS""@CjBpvR|U`B(+5xoC2RkXW^*'Yp77H#oëYic/p\⤁Z@%,x#5o8XC'LU5ZFj0$B3Rw;~<2m#%r$(T!7+uK#.escc I2&IψO9T3g;1zB}bH/6xo*p-#Ar+y#Q~0|C<.[/6&WB%)wI,8 Q5"VT2x7ҳOR6bC{1)QDnЧ`]:k֔z.?{ƒ\ ?&j,`],\ղ`I !ES(JTK$`cX3SuTu= OBYt$&(E3:' B6JIq+M?g=VѼJpۼHy##MFs~_]o3gOEq?E/'&׷-U{mtud/ܿż-Cvܸ{ӾR}ōA֖1'ҁRRYV*'3uœ*c[8 qEy-{}:qNk;:JpUէN l^S88;§n7|e<C~V/qfV}g4]"u iJ@t@̫ .GrV,d A\DT r+Y*% !ɺb\(-ǿS!I#t)ퟔZ3 (u5Tte{; #((J7q_ʽvκ }nÄ9b}՝"}Pn0 n+v5IlWзax^Dmv0nƵ+[lm#O. f]vH%O(5<sX#!d=@ȶ60v./):ߙ3dJx/>-QLd"ۍSjDv+9/L&J>P&aI3=w?'ͻf'U^wH@ ,:BUlQ.C0:-)j:)/h\bh/RI*X 2HAJkBajٍO`a3 ݅XsWm_k,rKŋk7堄/ ]oWO sENG]7&SD`;[2 s^_HIFՖje_J İ%0LvgW2L+qv#v5XPvj7`rddr^ *DYJfZO2 CaFL\bJ6P&XR"F`O"SM.Lx;*0 "6ӏ}Q6FD9  Oi֖ *3U Lg洭*?WZ*UxIkj+m/?o_̩|my8+/:C>kcϡQIV L,bah2o/dJə TN(Dкv/kOtiuVs6+hI9 %bBq(M0ɣct`@mv_p/^Xq/iB,}z<>.c56稿ɢ^)*8^\d!PP'G0(3AIZcHڦ5dK//NѺ0]E'O{}qgϬSZ ֐ 'nF2 P[l8#!1UW &+JW>;0XErbT(}5ƹj4ѸtI |TmjNn׋r?.~B?[hQJJMI}y"WSҪI`Zkpj8g~%lnujq\qzL?@=ž#c^i:bz5΋rk۾2CTThv_*xVk0lSէf[C vGD_p1Wa<\\/jw Am~VG_^o@>v106FfL"{5ٍ8t,f9fs^>w# $eHBP T+K!=*.;΋^|*T+Nd+ٌBl7_*w_ZWً'x>U]ƻ mkfQ^8 Y%߫ͅ¦M=XS1p*[]MWǚ57.{pߍ/kPy_:|2_m^K'Fɛ|Sf'o`x)R2(@rłd.bJbqYJr oݓ=Fȫ3v(T1o0*蒭G+(P!k*"%`s`_o%;NYu} 9} -N,8`0&@+p6[oaȅd)ΐv]\qVya`<@ߡOa)d!uRA0'K9qƂ2DAiȠ&JI$%vu۽uhRL(L|9@VeI_3qQxtnWH/+ʆ]~e ~ba3+g{2EH ! 
r7;wXOH{\:UuƁY=ͷKVj>o8C#F}BsݫNf{éugi>׽J04F3g;ӽ/\Z F̟ಁSg ǻ#ɞIwFekT3TNaP 0zm:1KgmiMk1nQ?U">ua5e=7vEҵAҗxzbk}ցb>x'̬ gAu:m@(y.GrV,zJj߱޺*_YnlrQdP$$JCpuȪR };Eh;}ZhC>ƚ狑PS KDQzDLE_ +tQn;{ǔuϺYO$~O_ڲ_i['iH>y 3zHX 7 pC*ܐ 7 p?#="-X)ҪJ*EZUZsEZUJC/XL }ǍgLrnb swAZ)G" cC^Q'mwؗ9kr8"Eh94*"gd^.*xK'ܓWeh2em2ԑ,}Wga$HR "[M K&;bB@AYʮms!#Wu6ق[Ye *D&8F49usV6Q=ei&oxsw,sZɞsZIL9-囷/Aum}UsgD\2nrڎ#Uzvڏߧӑ$2 +e%N(Pa[yuXF' |%E^'eiv ЕKb.Z*J&].>(c)&8 A) A526 d)f$cMp%Q=׺uHMݭgwl|_ j6[!cNk™d4>vZA`L -ըP1*aSK`"Ϯ'U'zx(H[g7bI^s.iQxj v3.&#}4]P!R2Ӓ|/aFL\(5bJ6P&XR"F`Hu1Eb\8wU ǂc_DQ8 SϜ&F11\-[BPHDdu4JXg(g)"zB35ZN*gb ”ؓV79+*EF\rE."`ǐSyXHzpU*#ƈl:#̏:0.λr LJEhj&< UH"ۘr1^HlkdQ6bɨhF7sPt싇6  "~|5AobueD1z 5kᯊg_vˀާFAȀX TLj;f>&6Q#0 {cos_fžsdjG|^M!DCh.bckMZmAK b²YW{uTߠeKO?x,/yʒMAImXj ,`BWVcP)bl~7˲}+`RTdeђ}4豐Cڟ9Y]5- br_1O2e]Y#RFM핢؁Vd}YF RJ$Rvϯq\^'Ê:RQ$b#^^"(Lt28'+\r(&bT0JKuYN+bf1TL8?Zk+EiqԻ,"Is&KeFߕ-41yV둜VgAC֩+ypnI>$"^Lvt$?_j~1OV9~{fwzx ͽv tB|@B /sgu#cG.H m,foI't 47>^VRя"{tt/NU1j%̺qiP9a]h"tb|\괿\Woy >^ ZfiFw\cXNky.2Kr'psaF|{_z/cF^k>߱][4Rׄߦ{&oY,|qRĠdNFʶh# 1aX:Gfm,K[Akr$TAf4I$HK!( Ogbj Pѥ<ϋˋN=fxy~AXpka'xx' 8[aETuBCT4*WŁʢ!+C*i&1Qck ]zo x;Fw+gûotBbEiQuzK;Iw2/=KuzFnڽ8j퓯YRh4H[Qv(UR^U3l[7f,j@۠jQRA R Z^h(Rh ALMCQ:-\mvl5"ڠ)PȎMp+UF' zغd<c^ߋח3An$n~QA._ϚVDs9iK^q jMH1bێ5Pqfۭ4|Z뵔Y"tdIIhϒcV.uy99XYd6FTϼ(e,Yvc*+(PwȊ&T(gKƄ ߍx5#lx)ia*<WVTrfُ`]ʔAE#%@`=G֦QZշ"+~NvT%~gڐu\0 M&fpF¼ 'IGbFՌjh'v 0<%H6€.YBt1(ѩ$@ l)mabGH4B p *L|#)Me:{fh"qgߨ2`ȾAl&.}<-o.Οm(߄׽Vh庎&o~&˃>o>A?,;w| G|*|l,^qlZBLX'痳T/fW˳ɊÛѓR57ϯݍ=S藔lLMy@0Qt&4f_hc?nc!w[PlPi>|8\N{$ZiokR%nXk|u#z<~5~O!l;arQSҤg!"I b3 o"UDD%$ 8>ΐA >3c. 
)$MH*3ʣʞ KJ°5X]cBl:~);;k!HqaڭNpEMv\ug')ϿuA|΂mE]$̪V[2 Dey&(FiGjh0j2 φڴgOMnq?0|ۣ?GGіbUې@ϞnG_W&Ŭlw|`y擳y^"% ;.R#emTg&-'i@./iL(vF)m|ȥi$v=>xFEZ}tiUճedY]Oy8Zl{u Zi33MCI d KA7Wi JO*}ckI."J+rEW2H@BW`/ B)vQ`n!1LsߙVSSjB&!Da@\/9*uܞ{Aϑ5QLY1zf<5y^,jMF@N'78%m%PLq΃(Ij.jYJ-_I9D J;qZEIǀ#d3I:f1\%{ulgYwG.pYW%J O)OSsXޯxf={e-~Q/<鬆ܫ9J`1XȒ6D5+ H)9̉ӎDP;lb5]742L.KEٜ$3$PVL(A]k<:F ldN: jB^҈,]ț2V}hcGAgf{&vxr!CVRBmFގR?5Դ!)R()YKvbyzqR)*yRރ)=:L9(R-ga-,@:o4YuT* b2b}-JdD)df#ٔfьٞ[UdN.g qEA 7ƽ1!&(V{1YtvQ$lHZ{;He|MAzK9Kq1@EuFN"(g-'fmPnBPgPab,!՘ThzX-SDWB(^Rdp!zA5|]icI(/$]T @2&1qocGdcJmv[{; m`[(`PLlӥEeJQޣS%_)}U_Ssҽϼ sLvǿEzwX.n:cށv'T֯1`U\N,X֪c/ VDݗ{e,sI9^G}YU@GTEorR# D#'ߥbN!FZ"rfb`aWh]㥌3IQ>ōaAt`VM>@Ht JF$P"EͶu" eД]kZ8/kql'26P_e 2A"HcB`95Ǯmt9i؋IF຾=훝3o.o.<}%u4ܶU$1>x|ۃYY'gs†\1R8ӥjolY=2TޝK"ImMrYBQ E+d+AdIKLILjJC6sQ:vrQ z(Ut(s([* Tؙ8{gꌫtiƳ~NXX^KeElW-vg,OoiOhp8^Mg9bKd evBmOZ^MlyN [.Pͷ:{!Kb,æ.YN vaY7y[vݰ+qv#vy,ݙvjQg[4Yq1iHh޻ CR2Ӓ|_H &E1qaX)c]H fًNcMfJe%5Z&Tks1Y$vɅ1v&noLF+:PiY*jdĞN:|܎3qv#G¸)ٙ<UǸ({\q>ńGj 2&H}Lɳ;&$![rx4;+knF/;qp%Eî퇉ˆw)l7Q,j% R݇D2?|JdҎSP6QaZ%+Nz0:fC2ڊQ x#j 'uDBye2\*8F@;ޥ'e=JOʆ{^G1ͷL(M^IM xeS.*r%>H R!9E !1Ţ0* +,/}ZLN}GEQ2N(w\ROHCu1UǮ*ΛEFq4D=!=J[ӀF҈y~i(pr:\@`b[5i8d..mg) F*u5GY HIvkE^3Kq& ȑjU! 19se-N:Ft0C8ʝPT/PET4sU}/q!h -1sI\p$"y0TL ܿoNtmpZjrZDQtL@w)"pŹFaUjd?S ?qhrxG<B \k:tܘ*!"D zpkZ8G*ӉWRo?h3.۴ں|F2_4Z'1%TF[%E),08U)atr'"([.to(S_ˁ%'D`Ip`s6J %*̣ylXn3ӐSCA?-d06&:&TѭDCY$ &/lP<ؤFzb9rUխbO9\H!ICK!֙k~G׀>C'&׊I A$&Ҩ 1˭1!{CRS(&NDŽsg}/ԧ3Ec2v{R54 wu wW]KXwz{RrB|tQO*0Z'\0RHNڣ灙BI2e5̀gA}sz2.Z@yj L!d*843;C*!1@H\1FiX I Q(:_Y,YAg8^?؍3{s dC(MnOɌp7KV jzg8D&kly3CCi:^.ځY2~8$: QaZMGFV1ݬixf{&iu\$tmB7mzqUL&ˁ GGNY2آ#Ykf[i+%u(O3gN ⁛$ #ڧ|gJ>JO b ~{Mk:+ OwČվv} fi3&jO":O%'~"5ЧW ƉYR!Eӳ䅽Ǫ .Q7O3\?}맚FT\;jw9TbBWAW"Z^9l%q  ~׎]9ֳWy_QD%%NZ(d4ŝ%"h`!HuL,(th CLg_OO.]_[opޞToi-OB h NA$ģՄX@svR*ar&Oee9omk]tc\3&wxNJ 7 7]We֞Ee'8/q^>j+ӚWW9XE'NyWs{5l^ ږ*#)x9ݟ#3`NKM8"Df oݧ/A8eù1TR$‰$ψO9T81zB}bH's7ozt䞏7BԇJEqskg`ˀ|?x;VehrQ1R% T D.  d { ;D0Aa5Y$u.-ɚ͙),܉!u9R >! 
g9ӃͺF5C_wCe?-?4 sz0{XLg9 T+gp*x ElѬuQ Br!D9Kmh["uI{ST\&8 u5HN9n;y+lLJm>XSNb,_ȳN|OjW= Q!R}Mbt`FSWJ9 !@`(;U6F i cpruP!"e(b0*Q I!T¼8=6qq梪I="aXڞ >kԣ9>3An#]7pgw˺Ͻu?ãWyPU,]ug3:~^G1Oɽ [Www6|tnoz6."asa}֟N}GO[mٕՏ]Q/GDk nTs=uӮz?_o4P-m_77(rAP`*ԓJr)䲴ZΉPO ֓3FR&.R] \eqҶ KPʯpup:.=RI)A̦nѷJ@a?}F7c9yZ{>YQuulッ$(bDpU2*oI)Odףb\OxUxRww IJlVPaHe+/-whsFr0w._-sZIo(4XZE*R%<1gĀ9/SQWS(+Y`1je'DLu>ʛwyZ=4GO߬ۛ8ӸG2oVFQ.pV(w,QD袻5׭]Aul>]zsrYa` F#RK&uBq.`_p ߽˯?pqtն3s= =itl8,Rb>0ȝ:]<3!߷+oc ?c_W&(9ġ*IQyhPy\cdȓւnH[ :n?>M_/ntף5?wW]^w Y-l̼]1䶵"ge{?_mp@!-@6=}=xm \FqX\[7];;9^Nt=R}IK`YN'-AyJ eSO7씽AOܰ>Jnfg8)9js* |M:QJS!;XwDP]Z&QZ YR('t'{"S"ٰ%ܬr'`mp̸Kvzk5uɊA qvW'CӛlZ9# ~?ϖWn TUD&=\`nqs9eO./ƓCi'd[ +b*ĥJ+);\e) D?\ů/bUL*K\IE lD!dqŰ,},eBWorE ,D!dq Bi},% \ijaWY`y9*kإJKzpЗtԎW(.+V d7WBs аЩːe-m"bߏ~O&}/½55&0dm]};i[,jBn7uNr6oWPR)ַ6M8Wx[b;/{:e[}%0-VXD(;Kb fvhC4*r]/oՒnA׀R")e`PJN%LEK I(De>ࡃ,B 5^@|6ٕ;,u惙3`0bduCJhCt7e#K(qk7,BL  3%YA?A. j@!o,Y 4\-J,0l" DŽrU&B 9Cє ؚKd'`wH',Y;A5Vec'޲H@)BM[Tm4XP@ JX1LP_lkٻ&CbzkC&48hk}2n:Y#|~:«<|ML Q7)nn@3 58)QD`Pp(r2hu\Hk̪}q%ZV . dL BY,#2 rRZTH@aRB^`(6%T͡2xy@D;=D˱Lwѩz e % )kz] 6Bf{!V&b%*cIN Q!GޠOz׻*/V2 -) R]eh 5J:9aƻ:P)Z 'ҥST.q z193UkgΘI8S@hIXAR䳗Ld D P9k듵Ok㙵ZO{ʷҔG|5L1A%V$NYPk Dǃ .L:JVya2ia/ZVF vrTC%pT{*dMHOQP }J*). N p@淉ר\ JC`Ik6VFRz`eSFwLf5d1CP$ 8tAb驛WrlAHҺ?rԮ#JRuEB x)XoзL0P5vKBWm}7כRami%p6IT38/ K7%\noȏu3{ZWٰ%;!C]+z%Bɿ>U dg}Zḵ8zA|NЁ@O  b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v=]'C;yNɯr~j~a۳)רeEsJ}rm Y8.6GxOh<6(I36?ElV‡sEnN u/?m_?ǟy1J`!!o]pzvuѪEkϚZhQPts >zP2"Ytw $͋^1)L/>_/}^҇ҟ|-g>ߝ"<.n"^~t&SB҆!=)݀joeSã&(ux rYnn Δj}9'c\7%|=Mĩ2rW_DsuS vbi Pl%+"w/\^s9c{(~ܽ o^֏!rܝ-fiva_=4b54]9,fsc!]~DZMήN>gC\꺬;? 
۰1j˹eU btkFo`X.oWo)eTaT7u5s{l-gFwrZ 9۱ye~:Nbm}}An/Sob1pki 9_t co?zyW?a9/< CqOk=?s3 G+4={W .׳߫61}^t÷w;hq}L6,l;GYRD3eQMp>mjɪӧV&|ToUf+/'Z떗Leu/ w"ּ0LKUiI=\rTOǸo@u>/@ JNT*ʨ eJsOQS6}|_{>}Td9mDs>JȠ2"1+Α2C wK,ɸr:F+R"?WO+i^C{i;C.+@ BT[aQgjs?r{Ni>Ä^L͡ z~[/v>gYv7gMsˮ2W+J7Z9Κ_ق \x)~F5NV jdGO1߉bb!$zz7X j1sj4x_eX%8RT5'= >zK]J8`7Gݎ2XuBu-SiOމRY!6.T-m5.%S{sO^V"q]s iˍTd..ⴏwAY;>/hN2IϞƨR1CFEwX1*|\Cp9[>@п]ª :TӍ 7K\c-X+댩 ɸ+!6KYZe] *;+&%W)+fp; ^Ӕ3CR^"0@/.؊;sYRrh ԞV<]f%dU i}> u[\liFL['Ǟ;bp#,0E@\E_UUFt#/}P \˥qZrTZ>HWV7)ɜ!Nb+/2n$yL}W+և_>BKJhiM \bsl ۢĽ"&霩cBBssh~qA6m>'=e]'poǫ'ǻŻzl 3|g*7ލprqI-ܸRmx &+@lK[zS Ƿ]OOTWϔRFꌭe0NW7{ z \J^PEVIҢK:%b6%18-"QI 2IAXg+*QLDBvg76݁Tl~sqvփÙjnm|2 jU"ʓ(NzP3$E*Reі[=}~sgN[Cx^:Y=- a;;Ds\4q`:vd$t6Z,v g&PN'Iґ̃)'d [MLN<$u2 5fi.1f F11szR@1[ϢK,NQQR Us~b\ӌ)‹bF"2~0|yB^| `bc3fF`euAN[H*(-'c$ce,7< x @\頄W"jfn*IЦtRGl?-+]L;:ڛzlA|4kcӜ#l 0X6 o B E0K]FR6 ̐ O&R,5* Q1H8I;WsdG1!0vǹH #"GC$t3^%8fU=Ygm#2/PB')8Yiʣ,#I%l%fBsVs~N=b]KbZr..¸z\q{%$b Ec6b (=8JLbAslŮa18Ea<< l`XCHy4!\._Lh OH)Q)@1@3Z.Ydi8$b WW)'@/uTwd)\J PJ K*,$q.iZjΔs9xeRi< A 0e˕aܠi_mU$Ͱ<bsL=0R|'!oHXB@K'8m6s*1%AhtSnPD6V`Ƣn&h ^j˪4o3)Oi*e#%;.0iebz9;Jy 1&۫_kEK sÂ̑˳Dgk2`Z53T֐ , ., `Jd$ؠ2`\5xy|,Ĭ,:g$ɹmIyrZ;,rYxtn#oaѤ`XݠN\գ y&L;,g>;-˚,V. 
?I_}i?(PumEs $vVEqΕfϗ\Kx~krlnskST_g&7 4Ӡ-6eR֧5#mwD8ٌF 0> &qMZ _0)y&mޅ/xP=s!b&xVB!7ߪ!zOh}A`f\_OX`b VJk'Ӯ`d5a #Ռ\h9u㇫zUjul(z!ɻeDW\pވA¯+Uq"MRPi2\% V扵-%OiAve\;y%oBM,˖14EŔI:KE."rt `E)kSFsb\5W\²p\@=s} T'h`#ͻk?@*A͒J' 4 p.T?t;YB꾿w}etP\I =4K9pBY#יi#6˲ "m$jpk?H@IVe ,\Fg}DF('px.[JZA,qK y k i~v8 _Ҳfݖ֛j)lX}>%0Ҡ$W5|uz?‚gcz;i[0Ŋ+ᳬLr;IܲtJOZgfŵHu٢o@; ʓe k4z!UBfr8-2e#TEI:nQ}+95>n- 193X/$8'T̽ l>LJ]bgVZx[NAgyvjeqo~}_|*{ T՚g3;c2>Z%BX}#>FS+ppD@]^q~`0% |["{oUAj*tRS \:_)H##؜Yɘ3%T{ `NPXa`(& <}R+ɇϧZ vAkoV }Lu={aKXL(mrp 1KSIb<(ϸ&23/>zZG yTNFE4 ;փeBAzŝ*0i804{)6eJ-d $ lD]LH0~iYE~ܽW?Ud2<x Ŷg|M#c>=rwV|t"uՉi`G:Gjdosiƻ'z&fو}ͱ{8po`3ċulaE!J}(PɭEL; GwG]er˻g7E(M&/{ ,9{t6!3CYԼVӟ.90Ӝb֥O$YOs7Atkg}MUTG]7Rz* Qs"ɵHHŭh7MtN^u7˝Mb#ZlXUG]Z1~ܟik-?`9P˫g~1:=l] .WsI,}#cjib|:$\t࿏ &QVt@͉MRU/| vZ,B}c&za4(RdCU;哽=JTŇ^wEČ i3TL`  c&q gujKb)DB2Id}22f9<rR%ztJ @ZKT)j-q )MsLo[˗G탩v۩r]&ůJ Z!B.AIT!+/B.IKVrYnɂիWΌN$P@#,R4G;W"e.wHi)X2Aڙc W)rsKblnIZ*AfII|YQΪ//=Z& 1TE#[\B xm "ytquL \_8Mad.)ke@so8#"\&fea3.͐F\`02vZըV ~'(h wASJF 7D %C, "M ʠp$B5x!el6Su4hq^'yDRb.x@%CSc%Nr`mVDHR}Hߌo3g-pZ}`Lc"Ep,zQ$6 O%E3E Q qHpogaRđr{NO7 kej#!%p!Åu),Y[ìe,k {kȵQ5q&F7QO iݍMO? ZN-{3QS\ hX8W ѥOqTx$6\`d1mAhh#P6=\C)Mx=k Zr"bX**689 ]q DDȒS |ޛ#+JmW]{ާw1úMTRNϟ0Ow$#ȕ0!JiId%7d֙LU d$ $u@ggɁ%1VLR5q;y d2ϛꬮ}=>SN=~Y>gi ӢgyqFv?/y98iDY̍F:dbs=MsAv`YB+ZWHלBN>^jxrxW9Y(w7KlWcco*`߮?䫛kZ>y8poilr{^.7Q=^VRen9?cr?A:6nl$&q CIv]dEJCW \\UUVUҰ^ \-޾a5 ׉IkWIi\ٯ+;ծCB<$">*r8*";\)9V b~zV]vG37~j21)y1oV#?&1t3d6? 
Ub!"/,y1`3}{vј&gi&2KM 19ɞ >`4o~hw#<{ Z]=2oF0k 3~^ϸʓ~wG6j &,6&ÝL{ 2j6 rAiV,sB#4 07m]r{^&aŔjdWg!|Ϊƺ8gQL^ڞS!ϣ ۴v+M%8bXc&F3NݭFaG8 s,Ezƒ<"ݭq=O`}䊔OzrևDRjXxzpA+l烁">urVo) pzJ+:amtMgE-b+Q 복hx/t!\BE> S.1V9C˘M'ggiX0lƒMrC,V/|ݟxM!vC#BL2-McXoOSusWuIͅ9-Fb<1x/3AztK5 l,C rO w%!QO ޻e͓ :V]1Z \|Gb,`.@v:-62F0Z&Ktb)2L話2lc4qFKr*n;j`LnR;3Ka@יut=ysq^ j}(?.|njfl,X>Ƕr n”?lc'.д\RwW=HB"=U;/Rk$X;e-!E>9ac&wOIJ M;&P?;]J&_AK}k ~(v˪R#I0 0XfYo[3QĀ!c1ɦ#a^S BUrh!zarqçQH4!GsTLS[ht-{ojonl~O>7T,玼+7+W5Eo%_ Nײlj{sJҵgu#W ] yצkaJbWRR+xJu*VLH\C>ֻW)ɫ9k~S'h_OPԺc@1bZTGw zݛn-;w[~MNa9%ZZ!:2LE$t 1ḧ́r -r,ָWy9{/>rljzsz_Vݶz ?Pz 됹Q wT`LgO5lM01(x!)[@^[y2RqٯFHeqGY/0f/8Q=02gKwL B3F *Iʓ.O{2l^!e~GC:4Rq'D,P)Ϣ,UյvH1$*^9gX)ꪆi˙UY@!l.BBOFRۄ1f&T6jQ9dIGY)'dLΡ*!N@Ɗ-dV9=++ʘpM꘭ѥhrFST2Tmd&XTjQbXXxr0w]ܲ,f;qggg Glns4<0 ZxQdpP,ANˁTPZ[FN` p.Fdn(Ξ 9 D) ¨:ohZC,hSTK-qFl?5/]M;zګ =0 OKj@>q!H;"g8GMH-q!XH*f H k ɋ"išHJD֨8F ,# T'\e<&x:4T/XM?vED0  ikK*"Z)'lVDs}0T*Ʉ@4G- [eD&ÈCfa\}Z묦%⢨|xoDxwmHL+TU.u]늓ڪMT.<2ϔ#)9}m̐E2M$ƞT+`F!141aAjpђ 0:zN d( )OZD&^/>^<}weP@j6dzf7OػOaMAp}E?>q)Ep:nSY7QGRIk%]55aJKLJ 83{!BYG!ݮߏJkR1hkRFi!$ d4IWN}L(eyrC|Dh+x.qt1+qu65x2(Q4VlTYCځ$= jPE>x" Sa(bܦlDy= |ja݅9r]""`JxC{A!HJ#DҲSB׳)ToM5"Hb"sM$'#Te.9Bu6&nibƖ˄qJJ_--Zb1eJhCPjrʳP6#2m ?g-eN:[Yqj\φs6)nA@nWYO.6]W~x(6_CMTͨ+wxTfM2߭J@s˳᫩#ơe7qj)wR^U)e$͚٣V=(naIuq^t-8ЏyT9RY/Yn^'͛Ll2IJ[2ۍcф ! 
[Binary data omitted: gzip-compressed contents of var/home/core/zuul-output/logs/kubelet.log.gz — not recoverable as text.]
^%^2~yȚٝyٖ垱{%Ew,lYV?$EEWV*)wV]8y<]X8֌N˲t]uJUWg+)HW~k^7(}'ҭ:G]I#]0: vi+PzફѕFo`0<~ca(-u)_uuƹ0R;e]u\fjX:e]]"UO9j7gWhTG+7CWncȘ0 N~ep.Qtiu)=:C]Yrf ]3:odG˼t]uʝ٫HW.%-釟nݽ.2W?@{aa›۷Bnxr6і'Bx<Ї/=pu'Gj[LSugrLQbHWܛpQtZ/a u-IW&atqEWvUuu%6X馩a]u8MSu_"u)ݚ]V4_.Kn]uZGKU\Up^8YUpBf`, dh*z{&,L;OEkOyάft6=_dNK~vJ]ϳ?Ewŝ'=]Ys<96w{v"F88~MwiPƥ'jY4jp@,`7:QtiNV]2fGFWS9TOG)QW+]at\'<€K(E㪫3:?HWV:%DZu)w|t6#]04a+g+Pư^=G]H"#ݯ `q+,wڰ A̚]};64S<-?ʉwͣ=Y+?CW~c0THW =whsgNI ue ;;~@ˬrŝx@˜9dm ;[9e^#aم f[䯘\"_;h/vVv:Qfr >LL grl+[ga:-SuuM~zDY'EW Kו'cfճJFJٓ?Naa&eo+PzY'+% zSu0ʣJ)߅)w.:]HkWGWwNGUWgh2 e]7/hOydҕrilpz[gɟ2םq h2R?YfJV]=a2(꾫y;GW}bho%#[x@˜T##&'Jf~t8~J=py ]y@1ۿr; q4 \&EӠi({>?M3$="2c ʀ6YؓkVyi(]ؚqtqik^TGɼ u%^+GcU%EWu)9:C])ZXi ]y]כaNˋϮ:QWAtՁ% hOUtduEt`qՎ+Z`n7+=HkWOVJMPҗ/V~旟:Pw7ݫ?ߦwRtWg| k!a?j?|onKcdcin uLvΓ߄i$b=4zⓐӏz|BO /iC _ o֜r{uaE kQ:uK_DK_o]DxEa4i4 F^@IV]8i~<]q`0?^Gh 'z]^F `qe]uZ]dS u%Pt`q]uS16,NW]4jqn/3n]uZkNYz]@bUUEW6 ֵoGWXN5~0=c]XWp Σrݧyqa! ]UWmz2t]7NfꔲueLFW7QtxyV](;,d[뮶ee6# o/ WV4+P}K/#"l 7t\;Li7tJ׼ &i`%FW(Uٲ|t)8UaNuc|W] GD`v]W)N]Ǫsԕ7#M;sGQ \x]ү9?\8mM'\w{}}K do]yu‹/˗AѿsnoEѾ/zdmxB>}ծt=K/6]+]_ߔtuX7coo_pzBE㿇W%βa/__~Vz_|=хp~kOï\g}Oap/ڋW)}4⇖ھ}o~x0>3>c'^[=\PAE"Hղ.3p E N\5HF ȟ=?_7܂ k~—fy*Y/W9d9L$S# &OfYɍ*' Pr{?O#Y߽ >~G{~xm]_ usySmI씨i)+YM;[58,S21h S1x;5k9eМ1+ )LckʹR^X'5URy_(lP~\h&S&mXU,ʨoɦdL[-(sbTK1嘴Zj j 9Pl\E1jԉT)M9EϢ^ZTmp1:͛BJ/Ru7DF7 wo7ԌbDDh9{XBΌfB4#ǜ|qi.8*{O@BDތnZSdv1[c(WR] ?h:Dc'4 c)TBȥѫKuN%NXe_ ߚЦ!QPq~nlwoSi- R![CI"ü KƜ9ikjaMM$ͽ{U!5-O,ڤ Q; }DŽؐ#6n>a&@v9pM% A4a=~ <6adrq"ADHa5LSQCtnAc^Xj GY{LDbx i :_A!N9cBJ2bU;Fu!))"ZZy;Sr 9j[ŌI*{SDGlc&e>n1#I j0 hCS/V̏MBkD2&Od0 >~! 
m_>bF\jb_18bpr1AQcSSzjDOy+*ؔݵ6u߶~ >dZջl]0L!ZIh|TxKHƛ@u0>"K?p}U29+RN!9>HKTLy` +&_|TXL3B1=h5'\0ĘV#-e>-#]ˊ6:zO%$ kRT_ )k\@vJx )A qԆo,fuzLTC֏Qߚy E*7~V-1/k$SamDbU=uq4\(LmFKkbK%iٻ6,W`?bR$;x0a&Eі(6c#ݬ>UuԽUtu½8h !&*f7KEmnP+۽,C~YBG8*8QX3Cdհ W%lvwUis^gFŶD#[-mT18t[ (?2Ggu]`1AILBi4 Ap2!vGxc)EF WE!A0"oY`#́; S.H>K45ۜO?rX^WHڲhR},F%5 2K:޼m@*gr]2+EX + w \̤E"fM:,21;:E?Hc>xy[O:GMyqmGIpIYk&nV  IT\}0=RԘf먵.h֐Dk sѦO[`rr8p ƷӘlgt+3Xة5,J1<@J҈j0jлg=kupU le>Y@sumX)+gɈ>)FndjVw 5bz |,|r\gR,J LGjJҸh 'е-kk E]>@ uMgA*fQUq di»28ťpAJf:t>6 WQXS1F0#Jl=ZMfҁñi(14qFO*pW /a Z6ِCL\[-:_!b] cd4+MWx'\$CdM]'4\N0KklB8KLר"a!TvƂ70&`pN?LM.nNT.7LCs &58ܠ2*gtI9{ѣ[dlJMs5+Ҋg߼y~s棈KJLSZ 7/uqċ`UGtU2Y6}$zb~(bϹg i>eWM+.p6%-^٨}ԛf~}Kh/nG.0]]'AxW dNhBnkۡC ˸1GJ ?OpUo-\/ut%PE)NP dVΐ@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H tJ ӾWJ G 7/h@ $ђ@R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ/W ۊ>) ph>z%P:Rz@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJU'%6B(W(VۣWq:E%N;H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@'ZI㽩V<|hRSMk\N}N*~D +$\z GVp (%)zgeGtUkp ]Uu#)ҕf wUjh:v(e@twYw ~C~p{X faj?]=]}lsGtUp- ]U^;]Z :kq/qwɺn$u{fCI>CCG: *.櫛ݛ< 'bEI/,Wً'UFTt|:.>Ԭ q:*&}<0*f ѳz 3'rz01_}kZL h~jޗ-kQƇ6I(e?dy7G[㨊,fW,1E{XnZȦp6*1D}ӊD}Z/+FҮd)Fj,7j$DׇS|EY!gkBX<ܴJMY{Mo%DUâۯ}:"LbO~lX]щc;np8V|픑G\lTo" DrWɉ8+C Fr9fDWiZ*\+BW>;]U[DWCWsCW Ƚ WھUEk#:A^;%zDW)aPtUJh=v(-%Zѧ`7tUxW>׮*JNkWHWq  +ھDWNW@8#:AL {DW,M+\ߛvUB;]UڿRú^3 w^gTަ 0kW{ǥOVju-t=Z#,c WUEPz'NZ?xp ]骢Jrchr .b6J&PפiT~nk1D'!:K5·4g]% ^!2~YN^6j3j%/٣ޒ{7TNo\z7Tǿ*SQNoPB2{DW̅ ]U'Z*JcNɻ`-CW}+5;]U] ]x+`ߟEdL*Z*JZD>I3{DWӽ 󣧫Rwut夔VX,W7tUњߢ(!:A8ާ| 'p ]'9퉒SCW1ńzXW<0]vCitʼn>빒v ]U*Zˏ*J牮NLc~A~[I?W1yܯ^ߪz&gY-l^ݱ_'-/ur³ٶ#937My_"}P I)F4J$~jXUՅ>-4[ZL3-z2k7^E.W?y[#Q)ˍdQ6*q&Ъ|]?W%ߔo|c«wWU[i:/´._%ﺤ+pɛ;{p'/huixIQ57!`1;Zqłf2a |5q/f(lw*.fK%L˅7"(QLb.X`jp=~Y+-hCj0zMJۖ<>4_< :}W /i^zBr:^^{oe=sX\ḻQz__0=^-[; .Ջb]kqwq:w#f}{*5y>vF̿tQ>RbO2 5 1])1D!-DOݎe]&y?_?W#!h~q)o< rQe+Jx0o_릭+ijȽ8Ɲqa8ِ]cP}+4.~;3}o#wQ(q+00ZF;Ɛ^4Ӷ6Tu?6.v]!I3Ok?NW?- nY'k]th޽̮!5f˼y7gZȑ"2E~b >.$زGbaɱ) qMLS} 3%zvdzӰtϜc p=9 LIx 
';h1j^a̻O=pnw|nG5gE%wf>=X$Kn34N(d'|Q DȐJ=dۧl}.)T{a@ÀiϢzjt |<󬍪e5fӶ\}|:qb]:#Bb^[OъuZ 2h)9$SB|O*D)3Vf{c]to&l}cuyMo(-˪0]ƻSW_u+ǺWxPê?f҂;M>i. @tsfgW_O}V\gYNq2;Mt6]O.DQx'ϳ:b>rddySlQ6F_9=a٨K@( ϼ)&Eu@vOv2l_ۻOh$܄M*cN,wE|s]"'*BV*!zνNf])o w3r*C{t&Ok_ Q z锥}am[iif[dy9_Qa–S/}|xW+SĈ"%^Æ1d2A(erzA}"U*=W&sqnrm4dt-$bD9.}lJ!BѤNy: ɥdY%(' YvQZeђQhSŀP gƬx15#@>wfKZC_ nN \sɒ IڒZJV"C ,ВW^#һݰd HEA&GYEy+.p޸ L~Z͠V] v`'GV%&"OZST 7+ y I! FpHٞx'őbgA52eV9Dcʙ6zV#ptK1p[;v3ɬЋm1!EPPŘ8w:Fhd6'Mbd#D' ю=V`ިV;ܡs7ӪXSH=~"º!7xHpe@dJA6ޛ6]M:IOxz>Mn?u&v` $M4qiE,%,sbpn n ^=E!'#l}(:GpiK|9TyX 畠!Xw՘nl\TFXt0Y$"ҞL  CBO#4uA-`,ZcL6>%SG„O*1 ><\mF8; ;,2ژ2! 7ECz_ H&&(,rutxB.@ d̪4 &CSN^!^->IH,b$ yڵh lc蛑Y'WW-|u#:hM;|%(/9Ĭ楓j 䯕En>t??QC>>[M8y{{t"Kv3!Ws!_p}?G:nj"ϣ;oH_tY-HvRNn2?g,}8l胿t4e1N6ҵ8t1tӖ>vFsȫeӲ][aN6,s~K'?Y\|Z^]wd< _~i- ^ vf"F\xƫwzm3G]4I*(TT`Ird9΢7&HkШu,G﹣ z=rt^8Trt֞ Wx ΁+IqJ f9lM!!9cBEL!F/B]lXVjl&7#@G\6'x6:f.*~缫tֶtm\?`=j{39=P{^hçusn{#r'iV:vݶI~$2o*$ݍp䮎f [uoj~\oڮOkynp6˙FW}-=Oh w{~랉{Qs; AC-8̡=ۻ鯧Z,CY\W+${,|n5ʗNti`AףaZњ/+>$ `Vw'b{_7W17)6~eNGggqg,-di?װZοt)>qiN^4 i+ru{۫ ]X|R8|6![ O.gShbACR*,^Fi+PބVČFd1[%=A7,+sɲ &+0MEw9|vQIm!(CRLs ,!CfKݗC'CtsDQ1;-G& X& YajW>%C^6~GwE j\`'cƬEogy<Ѹ9>Fw"wr˷շV}<]wlE)RF7vRRJ 4oe:vpLoXXm &{'8ª5eEY䍁ue; WBZFutf T}P鬅J`օ0Ӛ3F h0)!54}}0ICX%* AgN(  RVlS|h09Wd%m<;/QgJ&-Yv:qϢidLZFa0ޕ m-8C$|MB1 q %sMEVѣL<]0%jυ9㼍|-:iw6‚-dlz_|e[7HeJCp՞o)LzH:ՌgAdGkgC GksN[E%nSQI?(QyW Tk"Z gx"rCR aE.619idƊ{2UNNP@RFkÓP6s9Z׹wNJ!(tFQs%6 KP|ʦߍa2u$&q$[6qH[Q;j"X'{D}z먤EZjjMTqRyͬ̓7{gFBMvq;T?T? 
X ɞoq܋0QfL:仧jHє,"5k"MUMu5hW,$x\rL ᝜{O3U8j mMI(㭭MErɑI|N&'͵+!5;u2vt,c/:fX,<%~Mq]s_dƉ-,2++"s 9 tpSj0LsAzѭ/+,=!ؔD AC2%S0c p.1FWrN0/:zAj^2!qќ3^KIy76;@-XȱS*Mg(A$t85:(|TTbl:۠^MR&0ǾDD%G& Jn(K3 fBPCFS":06 $wE.Q3!h![6F Hn39h.Pf>:;}(;8pK\Pڰ1lbELy(ps U@I$F.>άc_Bo!BȽ{gyՊkk<*˵v R:vR od|PG'ീR1B cJ _|RL$bҊJbbyb) kt;WtɡWgc ۉOQLRaB%FF k1v RlTR^ sd2pڶ#x(ѯ JLf ʹF<-br>$ϧ ٟV >r'ߌ~^tj3wH[}?rtcA?,m GLJؼ:8gw~秦\j66+|{9o^t:Ŷ#lsp=UZGft=&e_r4m^KHweo >7>B/GW63E78N?ͧv{D7_p6J?-gnrz*ubHo etDsx1ds2b/x8O]VQ Gr<(^8s>⅜ķkSzKȳek\ /spOɢ/}{hю7?mx9]?]k_7J|=֩plպx ܢIR\u2<_i|G g[<^7h*݌AR&vW=mUw - 1}l_2z{,qN;oY7Gi(|qŠ9im6Ei-"P׈prDM[#n=Ǹ]4L+N</KR*Θ <0tg(RY. n͋ͪlbώPRve߬SRν/_1sTcpO0@q%|Jߔa>zW2T2od,Mr* e0NZTa=0YfM* r:&rdde*U%4PZp/ !tEV\A3҈I{p<&JZxD$xN_gro^]Χ/bQ$_(t_1S) gc)&2$X!@sd))&­39 4q9)Eb:J K~rn*cf^"GI,:-CYaM8/*qS^Rj/CBZmUD 駀怴5(aQ˳H 0z4Ǯr:ʵ̵#^`=& x2hH* mYMfjٲZ}߲T ;lY}[Vk-,gJkZpErWVXw\J WFkj ղ\Z++T)pu*SS0fjpj-s}ܮgpu4rFX+ v UP H}0H*lvUz1fFij'j7[ApZTnyWbվCύ]P\\kB\#dN}_/'UvG ?Kad%x=KcPjVǺ5|?[]zYSί`S -}ڀ)#ax?ğn.dm ߨ3@_Oj@TN8efg0 'oh{Tv5[uO[DmD3%Ϙz ~O?[Gp qv=J L7xY7~ v9(jR ~4loݩW$ɵ\Z]}f + v\\U-"+R쀫#ĕr%Vkt+lY=Ide-"" \!3BpuEʵRԂ+MKz+| ϼWqWwh[ PP P{LqedwurL V+ɠpr ZH ޳\}3z9fsNzNr:pw(;gv䀫}s` * Q Hb>wgT9q%1V+W(+W Pw\J)Հcĕy,[H>/i)ju[4h,.e~y,̰dE'f)KFBSD%5(h«xu$ZkƋ|1+@zY UcU(k{}$u ۅ1nU%U7Ci 2Z!&ۓً8D|ɒ3XmT H)CT%E``Fɭ=[!G\)w&*ǟ7SHN x=yr $H6}H"c@HW(ةz$Tx}T*1qE^TX\Z<9T|JKAU+\\)"+RtHtxE" ZV HP}TC1J%TM \\YMjw\JWCd:|ǃ WzTj6xa7ZJ̀#ĕ|Ҧ7#K3 ۽ޠeØIIrm5OT˥T-;1?_ifKЭ ci} F./yyKŘ3a0MrӤӨR05`1 J +^H&LjB|7Z:B\)x[3/lT=ϼHƻ"VWrxuX^{'Zԓ#PwEjuqE*pu[35=7=ywt-"Iĕ1\m mpQ8hsyK.OF::+3ExNs'_z~Hlmzfwτzh;99='p>Y rz)&K( 13֊NweGtd*4іodT=]rܐ]Z3w<ǑǣW+tlM_˵v{ݙk]ML;r-j=KYM"_ [YO9%ռ_U:n_}5g8XYP\\{> WBW #^kj'NnrՁn*]:p\;c* 6BW+{ݩvS{ӝTZ\! 
~XwHQL'̰̮Rv2SP6ղ;.SQ5WZ;2lŎ(׳2^僫3ϤKf _pb^Q$'`Uɡ\P-HUjgH#9i"\`#ɵ\Zz+R݀#8&uMP0zCNZ)+R jJ9WIL5"ւ+R"f1J;̈́W(3V H`BHW#]0p4Mr]%|,xr_⋧8F"ŃKo`cX*VyURrun±1)e6.98>.4~^0yۏwH_c}pܗdC5FOzMC>l^n78UXU{sR_=~qTlqVoB=o/zw;897wGTIQ1S%V,+]?pQEwj/~mZ j'?O~t-_1vv5i:p͐<=5#:MXO\;dn4nhs]>%xf뻸zW>}}ۗ%g$-ۘqh퓟b:oՓ*Dd2Bj*i RExH`bʠx+:V~9aj`&~Z݋B_cvL*A)L DL"$4*2HM $3&'E^v~h:@ځ-~zX^.bլ\D .7/m~z7#2\?ϖ{s?aei|4_l:򵣯]3߭o6=o,Ϯܯ>ޱv0V M^SXD(>Oqf7&{;422@utB "O$3( gd4E%[*3Hi@!r&Zqd<3)՚1 b8z2v_?{7Wmeo2o)So)]m՝to}OoKhbtS4Zimx(ެ)pUw{kW?0j)c^fR0e'n&foUU A1{Yvq=;ϧ3TO<{=]H(pvazm^>Ib_ p9ۘLl{ĭ8fh7mޯH8?md&1?S}赠aYJJ:7Y[ Z8z,G{ᖣ,˱[5[p~37BvuY 1Ok3ems}EbT21CCBb NP,"R$)PH{9XvLF96n9h n|Ə<òymZhFEr ~d৻N'y&ao9ປW/w|Uđ_*adys!,%ἪuFy;{ n$eQryDZ_"BtYm,vY7!OfC=Uln0m!C_|=o 15\kH0d@];zIpIEC?ӑ [mRRMUǀ-E1 Jis3][Bs @~"-RkFOFI# !S(ü2ք`C6r j`ms{0QcƉr. IqsiB(|n_#6P؅^9ɀ\̔ 468b>?Zu}L8IR) J&ڔQQh~L `D$ {(** C& Ar!4!J9(M\IL0KKt`K1rV@r^X.IXR14yc O.)"EmJ<] Jk=q>q(EΑ$’ yדх0ڴJH+=LhI BOXu6Dϩ#ÿ"/{*4&`:g hb I ܴkp NGq1vy!fȎ}1B9K9.oﮀ{.&)x2DJD *E )ZH-ȣ/t΀O$򢳜sgUMt.q*D m6gl*e$ A@wFZP-! 4E0KmК#3%A"jGs@m cb4lv\ҽnzjp}X=2Zsr?nr~غE>*eh`A 4S ҐxA8;e @ RRJR[Dx[]ilmr)+BQ;-(v#` 훗o Y+. 
dWS"&SgTZg f6vkvǬr*u%FcyS ᭪1@\$'N sF>F>F>eKQ.52#U)xHQx3PZGDTh%IiM9bQ;KIrn!t:ik@DRBHψZcO9"v\1rq%t'ށW3\|߼loBӄ'>'Ԛ> SC!T\cKq 4'eH+5 5ʌp'cME;k"&OF+!2p}m]-yø| ?iF4(tIVa&tMyO?&.`kFoGZvmwa<8is {g_wwf,|÷Z1T'<]nn=z!H%l|>Hr>ΫOSBD*ιXUY';!ਭUգeDYٮ]@}4_n=z!%V# op*Tu]R1ajQJ(*0?% _T}ߥSBzav: |oS$ tV0N{#"G }Ig|'ʡ2}X}gl"/G1_bj-{`Z&M|:7t[‘g ^ʁV@%X:n7uJk"/-GQ#$Zʣ"ހ9˧2ᦣ`85[4T\MJpD]Qڭ/{~*H: 4v+ŜCU$ʑ siA:#B tQAhˈU*d}p,i"0O}DnƠdpZ j1r j|_7QK??>u TɉTD)Vxח!{U!E;ܑx&(DMStHR .Y#3K))1p0!qPDeRd IIAQŁR) *sX3*ta18I6%B'Յ5w>< aoT0M>7A0O,g&rY/i>Qy%hq(+eI4dc#9J{%A3tBێ=̄ªc"^G9d:&hbqZ{_{uI^[68R@XzMfr@Z>LRiH&RVQ ( #(\H2Fua1rƨ_ɊR4b18U#׈81sQ*n)8DHdsE F@8JMJS`)7"D=ZLh Pl kb&R;ԋeKuS"/Y{u#%HڇH`B(JLA1׋O/ES(O*l0l{_<6{Ao8PGst#~D'㪓ovQƱdDWz^ 2y`pJ8a*CR!|@RIBRv9'> q%($MERM"b $"z^0-dB˖Ss#$ eV1@&<1hF92vi=pAQ+̗ܿi߷f]Ymk1o(=c9Əei,Dr锨,䌋6 'JzHBa}9"Vkt7a T sBiju)paV;nmח]-WT"嫩A5R kPd*kPkPXj(1]mKE6j9\l"J*glBۭo/1I=[rAoI!S5I~02BD{9j5'+kmH ?K^v/X ,.F<_0US=-۴ИlMv=~U]΍";Q{cYecW93AZҢ4=4ێp(2qIHJ7VoQ̊ ,kϞdvzܱ,;^^r|cM (MvPȨQH:1HQ0#UL7ߡu:Pۛ~۠ :Ea4 c(E&dc2['Ut֐t @OT#NC%("S m,a1|mf݇9ʖ^Ί( B^ 7Fe0n W!jRiF3QĘl%>Ѥ^ \SCp; ٘>~ikO^BMq t|9[IZxY}CqVcd{׿T~wx:dH- 6b{|uX>=>4uA>,H|ֺC 2cILd%y"n9oפ .[d|"6-] E}*:~kdG;x9i 1jMMgѤ:±ՖîhHrS5 +˳[\=! =ڷzDsA0>+DQϯo>[*]oZB/;|}qߨ3:kakXWyH]0NvonƟI7- >;bPrLZh&+ʹ%,"sĊbp0j>JV-!^.Ld `0.pxrR*Θ <084TȚlb~0IL}# [^> %$az'dV?O<'7&jx_Xkj`),q`~y|*]efϨD̚OIK#ASZY\^A_d%/b::'7.Hc,'Dk^g`Vj%U=ަGεy5M@/ŢHL~ K~"bƜC[ġL&\QJ_4d|lpf1hƕ$'A$ :p6rLf[MWULGd^" "XNZ#ɬ&{JT~j/8L#BI*ri!W r ikxVB[Z")1Hcc %j\{^{mm;w+p(y|@lh\:]>P\;om-%Su@력T2>7{v8WBG,gCG&7hݓZ hT>Mi(IeXAWgJ!] %=%uoӵnoaFS]gŴU^krUo: ;JfsMg[;Geiyy(g1+eqLH<3W2C%T+pȩ=4ZHs[Pv_|է'M7.pZ-i\љJ\98.EgoF~6LMgӎ2Hs2~ 3Qw~[:n(֭^aASׇ1rv[ۖwx9r<)D9{eT>KR%hbԽgك c&mJZpbi: %*++X7ngMgO>;lV ;NJpDSXJGM /D3zG̓ @}b){ඉyԴrS!KkD0+d΄ Up jd;URDBs2€.Ct1ȴV)'0Px Mo2HŤL,TqPE$z/D7EAIւG <-dzC-6/CM-ts(S+l Ι"cF"g)X RdCfd!D=`(!{5:A/ŪdwyRCM/0S $J8& a63DlHl؝A ;_æ{lxtSՃN?z=~ظF,f:nUI;Mrh6/<)`$O/u'Oo# W|Hi&g$SJH.@ #QZG8۟Zq_ŲJZ6߿Gʎ{r2 Yqv$xu'ټ )5NU4A8zj靮O.\BC,? 
07͸Eijo"j˫=gtR),r?/~57Oͭ3LD'`kkӠ{q:G}{߻}uxT{:1[S"~rE4{!OeO6}MU-U7JB{na9?^Xw#!9 1˹v11xЯʯ ]U|t=  ۄWkwLH„^LO.ȭĝF%JTw,JZߩM|8ϣ5vEvi ?wi-|sl6o9{ur͡nx iS_L x1 Aճg| 3cFksӰis5y'|1~ yJ$K~@ Z\Ƥw}ŒЬ6%{`a{NgEDޙXm4'Yjw+ՠa ^֝N2iXTt)IEstdGsٻ6r$WYd,s;&Q$L{(; 6XZͪOUIpAq1br`YOzj }ϰz"jOVb3$=uIRP9Ƥq&W50nP=Ƒ57l9Q5cՐ7zӧ7zrFIn % S 'SLTUi ^LJh, I NX\)̩UVUW |oNoWW:J>YY ?v9Y^HgcYw,TY)vmc6yYW.X>Cy} /^e,Vc,__]__>ۇaUFo;FK.=|y'}W*V/"kQ+<)m^L>t7b{ӻ6GE/j&|%3I<\߁,_Qo} S U`}:5F o7T)y3 (S*q_dઊK'S➥5jQ*%pJW/U+<>6urR V[sBpON \WUʽu\Yp+t~Y]E7;kXV{_>'zC/׿:~%\WR]pt>lݗ`7Uq3u/O狍0jF1B\|;d]ym{u ODv{b6$֚ d)% 䤼!/Q%PRd LCΞjhRqzsѼv)-]|_wF6ZFĈjq=- d5" tsLͺȘ5 d&# :9CgwXRl<Ľ]4y,Oq#'ff\+"(0$re ٢W18RX$ؔ%ppVw͟_Ԉ8;Mv M޽ʳɊi#SS&9CC_fgwUҳϫx?ٛ Xnxg.P{:4t}>b̫dlRBjQSv:]ZF5{Y֧n$]HWm:]ڟlGDLL+͘Y_r>1. (D'Lؙ̼ bZ(#z4vo,WC`Y5 D҈T -ZHeMaͺ#&q;ӴRG|NhX ))J!#XddJĹ#tB=`F-eˮ}bnt t!V5|(&Ir90Ƨ5Ӿzx-Ǜ<];udPHF u :Y"3y,Yg jH%rL7xR@̆L0*f]YCSis9]$ .z-R!8@ `k (,w; 4fy`OBJp $\\^>ӭd)d2*jxtmxoG݀]k$r65Ԍ:) P{r.*e/#Ěh[{jA`Ps˚f-2W3f(E"ɱE"@DV>n(TO78`\ɓw˭j(GS|,DMZd$ r-ʃGMuuLjAMѐ-)iϓ@2XmҦL>'*ɀQ@/ R8#c; iQXk ݈fׂd2UmOvy_tnz\}-1kQ8*QNQ(N*G&k`'L Qyٖ*R1Tg')TbTF`%[طKt)1-^h%泴bi P{wG{@ui"BGsz3BeL2ܶCH[ff^tT2$TFfFđ1!Dd:[a3qީ698Dl""4FDqDOv&w9 DgcMu咧 FTOVJbcD.Ma,o5*pXPɢ,gB 0ld=iVy %mcDl&7qqm8Xg3-9Uc\#.P.hńG IdS.&K}cvu`GI"&VF\| \LVq,bc< 잞>yo4$O:v|@ܘ4ُ$,lx^b>Yd/".\$Bw(:k|?/ec"c$B NZ"QO!!Ek)?LZ*W\Ԍ D@(DFjUbmA%p*ԋ(-žs/LiBi HggdTz0Q6fy`""]/ (7ׯy>߶S.&;s*0-+7)J2'iK)8J]I.F9*# g7鞽c]ӷ˚&au߀1WP~ȇ}gwI3ޛl$~>*wm-Z+٭k~j. #eB.`;m*Φ԰(%ng&yqm ө!4&4[+3j&^}|Zz lI Qibg RvDE <t6)K,Ox=<`,R%f<бܡ逊:lp(r/Y%byl灄TD3:Y) goyٝA Ɩl XaabJ6uA"C' h "uCW~ 9{o윝[6bJNXҚ\ђP,z,PGGс-"{6l`Bϵ^Xq/,otXƊOmQ q^)[:Ct!XV`DP9ݨO85}!t,J$!*6ۉJB 5]TtY9\r(&hHm 阤jYNK%b 1dLŸpޖBQR% J%2Er;&L⛴e_L~|]͖5斿,t˫Y TuۧUr]jh}jn/&L|Y~7ŭrK-K!O,M^盾:XUc`J)cyb|3P'6k|y=^&z}_ayN*;cx{5M밎6k2&w}ǟ\"3zOT6zP嵏W_?Y7m/p1IW~:Y^\p,Em*w?^zJJR#{zF<bw|n]Kkr߄rܛCIvoӓ y`&4mFb.%yðZ:Gfm# C^%*(KBT<`9 HJҎ4A>)G>S#Yň]m˼z{=]tȊPJrlTKB΃OWܺcp0]rO3 t;Wq;UZmعJi6v&MӟޯmF*!JJN;v!2)S%D`*|s Q&&#rPN*J/rYJ ZF754MWNe M}(KADaDltr&"? 
IBc8׹]ϧWf}Z(f]QJD~"B,A'RB_eL6ѩ1.yc 2Zp,_Tk &\|b%㤒I <) N`%qz;NZזB*>zh:$*`)H-'sE'>8 5<ڮtaE*^GR'\  eLb8؀llc[x3&>=^Zm-Oza!otBb媴ghΈsC | j$zsXVaj/G.ת1B &#HBiAI[1scz +u^`|M BQ![*T t>93кX+qgViH}:.y+Wϟ[mMY1c޽)*fG]qdGOĎ+4Omzշ6e.;ҁ MD\6`Idĺ L0?{Fr]pFn,]dw twu[%RC^~3$&`-)LMwUAlub2e^Gӣ]PR1]=pvZG`AuXA}[.hwJӾ˳,\TBKkb2Z)C!{ "tJT,x%FKnER!]k]_Ncu4Z*'qTp\3?➐`몺u~kk۵ua48WʹF9x{3̓?hzorV!P{OzZɎY?_22%W>K𨵬[LMCz0xq Pě'>$yrjhm5۾=Exc}[GbIPn?shNV/i\6I(MA#(#&2̔ J lYZiZ('P+9G-!1͢j/NiN #q]֡t)q2LHy'_MIoʎyS(@h= #Hm͑eJeE4<<)G^rI!Iݖ+-gk')T%" $rvNj*d#K\$hA3Gq,ȑ=S:g2LMyJbQ| YΊ^-w C QSs1p!( S1?+#tœЋ@I|)P6H\TP! )Osk-@Uwi"pŹFP N'Vݥ换hqhOQJBy> 7N %Hpn +2jRDlAq)3J1_4:/1#TE[#hE%d8 mZZn ==+"&|4@*7xojhD0>H~Bqx`Rt!~NtҖ/CfHl35@",`g 6g+n4}n2&Wo >fZ $ OP iM!-}٪"FHדzzR;IhI AW+CXL.(넊 (:1m0(rY M%5&H=P>5*4(CKF00-Fg^>3 \uE,ݸ8|mL5t=yj͔ u bzLxKeLqzw7{FL(:9>Kk LA@=pj~qjܝgqߜ'?A:DYfZH#3TPJȤUB * 9(%'h|S&m`U  Cj blXyk]/&߮M/lSNjrz~aYo*;~?8iPn|T)`՚' 1xV ũ-V)yЄ<)V5^4BC kBz{Q PsxZh)@VB0!넠bJY;"qFr1Lfin)9S1;Oh|b +?σ5P%"=uȺGMnjx]v5Y'Cӗx.Fr(wEMz"Q c4)0%!q΃k{SF'SUO5K<*E@E͒2A  ΀ Lڀ@לSA(E]͕S B!bԂ&ƘG: շI'}3>ϡ!WG};h~C|·݌a5j%,069ǿTG~,"o~i~~ύī}ln>O `8]dVݝjwCfxh'OBP;-Y͔8c,J'_Φ{i11S{&aa> 3^CœowYw6>rcK /abIB g 8{\Dr~)'q7r}owRL݉JKu8Ch(S#e)ݨ9k[~jR|_ W\M"0~_"jA] Cz;{ly?.x&nPnXhTڏomF|S}pB-VmsBl>M[;T߼ `J*{`[uX]%IzyV)]8i^eV*QR**)juHd@>}Z_c@kJTK@$`V;N*P\[Y8$gnHx.zGOG!*`\g{mf:۬C3t~j7.R79(o:_/X%0zbAx  @})c>@2bFFST2yQDK먋\8 Πd3):u4riVR x1rq x xf]ծzGnD\~ttz[C7(%iGyO}{vho70>-s"6wg\9Mhz`E9s)K |m25&'j/0de3 n @>EgEГsOY/h3nYdl c&HR 4F!V#!%p!ºY묭HaV2p5Ƅ̽?J3vZ~& tSsG߁}'h圠kUn{\aWT(ԯ\"#'-SqR:Ju_Хo Jt3pEJc"=\Aک.qYwenwnB@^Lnl {5a8rgrgG}UMe5ZN0 B.㴙z1>#&7Ͷ#>kaSR*Cwh6a;''*a? 
ͬBUu1幐1pD mce*˜ȭpP'\c`:s}@u2O&ᰯ\)*rlTϋKmy{KqX[-p'YMؚ0 )8GhzF6\N{yCu1ZqlX}ob.Y0C)LTO/߿HwsLj+QWôʨoweUWwIɡ]IDӳnZ|QԒ6,ޯ*Ӈ>bˤM2|CrB(kgY`v%&ٶUS>=Aυoh&(?.'tV~wõ%ls}Ntc9!!{I]Yn'ton舒Jɒirp* C2:9p"q] H$E\?I_-n!OߝTͿ 4d7p*0-57Z?\SBA7ɂ!|aH`bP5JkkRe}u|G;5AVrOF YSE)z qB K0Eݪ&drfEmH2A?K(?xГha 2z -jkol^Y~4qYuvDlHmJ9l{9=I_kugdQwv櫙{h9sh_r6탒7^sˆӷ>vHZOӑ]ib**;% 觎RN^1=CTət2rjBf"2:ǘg2&f'b:fyt)($L 62&؞a!9Je,=>+n%*s[^xOOzG]\.rW'n<>-r0IY.7xDFvJ֖ L$ #2m(dE΄P:( 6L܎hZC,hvĹtw(池vkqGރݣuIBYwrxgUIyK,*f H k"išH.≬QHE8,Nڹ5qÆO|ScAPDzD.'$Bn xIIXa#gm #2nƔFI5\pG &y+Ʉ@LNEeDlMO..qS5-9E˸{\q.KAG#K:ĔGŤ6@1KVD'-t=.>. !x>mokqXf7؉Ok4^\e Nțъ{6O;k8tOT?N*1U:5.񸤛sm"c,GVZ2Uv]e3İ4\d0E_kI&ylQZM dvQEA_\eōG4fedߖ\2ݟU fjr"i;~ܓإwTk$L":|>/[Higմ׸=ڄ~uȓ!#Rw"$%BtǞ) Y!,=(I( <,'fIDrRd,EP8U e&f+xnr\Z+ *Y)I=sDn[[灉sB=O^Q#$qytJX٢PaڿoV~+܏|AzL,Ig.r3shu"{&A(ɘeF B3kp %F/$C\7#=HL1 e#nMxu/%Mr XǎOGCt٥ɗyԢӦGKXwmaY0sWV 5mq׍k%xzSy ۘB52jom1:y_gcG:kŵfpN`D,ʠD$9IcЎ~" Z㥈#Z[qbis*k PÞe6Mg;sXUbJ/=G$BAAc<, ubNrcswt>ؒ64w|HX \ĵ"ԆL:Ƣ hBXFu?+< kB➥IZ")qLƊmQPڈ)Yc'$w L E8+4̋bLWWQҶߚ^jkz! #cVd ىEsBdrd&a-XђWgBtԮ!@$/k+9r(9AЂ:w>xbV49 Y~%w +smO6P6*K qUH~m*^&sKEz1: Uؤoo+W/g骚|tii"D:MbXX]jP]2&,6W}WI>˦f|v4 L4~gwƷSԦ~g&մTCgӚysDێ/F p?T,ї7RND]%mޅ@ !e6x67.!ϡBdS_t*f7CMoD 8rjfտCЌtPjDht͛!~$%(# ѽv1VBXs?,͝cb!aGփ %R) u`ڤ[% z+n9 תD,W}vԇb4uh٩ikfo~UN-^EpZwgj(-[߅0hk*}l],?ohinF*KJ@ r3OmҨq)9YW9P-]ȝn8M_i5huiraO>^9UHss%6]u7Nj/IQ(gDadjk l6~ltn/@l:JI_0E+l~&ER IkC>yK2XXH\צp٘s h~ K&(! nBP[:aS8Ib У@k B*S( ћiv5СX/XIRH~@5r-:ueщȮVfEALN6vD66˩^D-hfb{2hXx Kٸ? 
%+Θb?CLֺt> vZ ŌV<8dhs/ypZV%.rG}g{4FU^ $t2Qr *j !x<#7dY (e:z3cAY fdPҩ%ot3bk%~du!|2|e*xr~4ۥ۴Y3f4 PN:z!tdBt|z7h'QhC$m|> P)Yĺ,*$|M]Q6UePcPR2ݒpufG'rZxW6ܰl罥_p` x L`(,'MmitZo x Bӻ qHoֵ_N3SZzG/4xG W̏<'2>j4׵ܩ>Wz78~st7uD750z!F*ZDUjNݭ |1P,1(U FfF@)mK+\ti-s"նD 1z!RNet訃u 5Kk&|]Ǵi,q2),yO#V6b*{I٠w(S"8r2 +,{cu(e*6B*aVKmfFdm0`ar15Dž`Ԧ2iERM,)P Ɋ%juVyn: U jsvhm }8PT>|,:$*vcO'-9(_X!0x $TphiWߋ~ HEȖߵE*R0nE x@¸ otBqV3C4IV}L+9dɧ(P!pPdM H#&BqqGN|'=o!m|4,2Y~1,BAH 8zV@BeDX-n/z~,LEi:H8 轋1 tfEKf1z$G*09O<:tc?br>[_t!fHT:ʆ0M^l~TdLI;'8o=n OE*[5އXtsi5H~ROۤgeP6h 75*`]QFo`6nf)@fI%fo9W`(H:hC,P6Jp`1m&g+|Lga0fWoϽҭûMӇ$_Qڳ9w۰rzg=.Y1o ^ aUoq㘥B{*CDZywv%RXN)4Cs@Dt,elICCD1?sՕiKx!?g>@%[^X|"fFզm"8m-*"i%mg@j@+\@b$@΃V+oU*,h>cs 9Q,DrR:Xac=V >^,6v.>LꞍ7f댿޷}9_|lUt\q)X|^ <“xVB Gq qV٘U'QyVuJfN{GH D].$)Q{*/~XNʚtQr>KeHch&{VuW_՜|:9=}_OӶ4%;n~ίBz6<1,8|2EBlj& NHG%)>@Q086g:[zz]y7cKFK6;Ulmc U "&-$H p=Ӫ﵋#_8/9 HN%#Z*SJIMI5>CO$/u[贳h~iwE?Z Uud7R '^ x=h{R;x32Xm(kdld[T R1JrD7n!EYZeV4e2֊b(;tR Y٠ZK'K)ʒeR+o?}z;eERHy?'yo>oT'ѻ Vlv_ME,J)͗|19t9:s?N*5_ئ{iS]m֍RYKZF~Ңmz_q*oiP"|Kt|fO~WW3Y\(z/(K^EXqzb%zsC'|' RdT_Y{6>>_ƓsE|RceQ?iSQA:P#xՔy6y*צ#p})Ol*}G~Ǚ<ۊnr1˟ qۜ28ef G%f^\᷃ Z޷j< k"􎍗xkh4g5<,O,Ӧn<6wDPstwQjmj'g<}C%s2C>_ ΗiqM_;m0U!mR^xV嵽?m1QX@"E{"-0c*usܜT:ID{DEc5ZVk@ q̌Eym==:uZ*hK)9^!P$rJdC$-48&Nj`*ﻣjL7؟v1Tdޟ!._{,zefϘy )GIGGI$e#ナJ ͺaLS N^1^,>ȐAޤCb`RXr]*e=;l7goY߁ϦE<}s웵w8'X;!|[`]׫o+E "G0@Q;'MBD5d`Ԯ\x\#[=񁲴E*6UF"CJk8lT@QJXUe:L\) \!gkBϙ):KUwEB! BD[7l;ӀI+PoWeyU~" P.9t% Ҁ0Kצʿ3KAޤI׍jގ~\"b“N9_Ĭ[gi3AU$:9-rKFR,P~U):&)MѤ.&I!QpRWг9%Go< >{: ֩U`هgae,![JW6$]8$,VJ)bt*e@TH2:dcb)ĿG-s3T)0#Fgg&VPrq粋[fуb_:Q>--;/fI_d&͎T鮫EPJ?{+6璊w^WݸO}{^=!wJu595|RnZGw T{\Խiw=/yp V̷Fև?~ZyvȇX٭tωיϹ'. 
u;vQQo[[?PJtL{F;,o0Cٻ޸n$WN',H   &edYrZv^{zNߖܺ%$v[dթsd3U8UiFDogI 6uP_ØeHEOcTkG'+43>P:_IY3uǎSySӸV?ፋOv'ZS>9?}ߞ˼[:cMR܌eM6=b}%k|.Ѱ˽4jВjÏ# wgA/,>e=H$aK }soͪLX_NP)AK`iQ{,rnȴy%v^=*Ϸ|eZu5gOMܽVP5H4U;RV)k1Žb,9yo1T`o58bIKXoA)"{XyU̹zDgt+{ k]d%ǐFg??S﷖+v[>۱Md%iD,qE@?vjOpEV(J+f _d\slj(fci()CBNԝ$PQkem1o,)X֦=Uq`+f[wbI](k=s|`y`M'~{AetsߠucJ\ufXjZUGxkʁBFl Xm+DKdnu2.SChwAfXuQ9>Sa.d셹W?/PkݳkTb,e(E c#֒N-{W%q/zPqO6Ahb'tG QzhitCp%A[0vScL:40q9ds:3M{EKpuk7ū{s S[\gH}3Ƨ@L]#K'r1!xg {b~4+>G 1l;ۃLF#z4}s= AXɩ;3V-8(؅o NUUJV5T b+Hk- U6`HIR*[GꞨQ7f%F{]MMS f_T(Hm Ѹ>71͜mzN iυGJIвӑ]'L'ңQ=oN1 ٪b}``W^|X?0YZޗqL)z^T}K?!zc|hhKTW-94~A^1F%g'BT dR3@ZR`DV]&_(jH!S 7;m5rGu#HoOt ף_޶UÖk[-~}tt1+^鏧gN۷usg;snom\tk뉺@]K7jH;SRI=)Uژ\(MU$}>/Gmu˟zj:˺evEO''F'Ŵ#2 dZr5N-nGgK3~nf^)o_}WcP㥿wgfng\aJz\k/^q,Ʒ MU*V99]WWrV/<׆QgmMN6u^[|uUEXaEoz}OJlG_z'^|D]{~6~Eӳ?}z+eOn<:s#xc pyyѥ^SoM|z !C ryd  ,y~PW=׫ F{pĮdXs>vo 9&PZa,;.Ћ\7; /F}{rӡ^ %Ey1s E ٦b WHETd%8/yr >7Wp[S]eTVWp+xKg6~ቁD)PX[Ld)? 0^;tյ-?^c]Yy'g녇}ϵ=ѮvX6UewۿDdkn}TFHڭtg3ڕ93m}|[[$&H++wǰLt~^ú۟g/3S1?򏫊u_,K^z N'BI) TA2J1Y-mmeǿmskEoeӷyYmy_/lTIԃ&nKMI;Q\lc&I|צk-1ْ 4g48QΙ \ڪΎIh4ܑL 6ȶCz2j-'!cÜqwVk1Rr\޽x:LRn.%&"Ym &wLʷ7܌)!&Z×2AG0cא06Tۻ0:&\Mkل1` k8Q_נN<2iCT1T"=6|.coƱu&cR: cy;LAԆFTyq}w1s x+FZ!b(u$\G6%W;ɔ4vC/[%' e%JΏ2r3tZFOf A 4vR[EG¬aBI \c1ZHpPRp 3bJA4z9oL.u / 5 A{JLpo %Ra,~2/ W<Ȳ/EzdH_u4 ?A+u RqF&;sʺ@J*@Ojph"4)Ɍo1($Ɨ !5P fdYgCDۣAJ9>TVc#]ZBi")!t_X{9~iԤQ1뜐8PU9hv'"!`BB% 6wg`E=x>]{Zt"ѻ^W HУMB$a1- .M〉X*%.]U2Ã($&N#j,$t_MK I k+Ǒb58tiXyq`{ċd A9b͠B9uu <o 16X8+j`4*%*Λ6 jRkeVNb; O6&;WnW,6V%yURIz#-Qb=b2"߁]R%g A/f9ІHT&%_1ۋC׀R"Q2=Y/m3ig>#xX{a̓b:`(ʀGj pq3j.Nlc:Ȍf&S(PSeUA@,7 cٞ]̎1 #O5!>vS٢RlEYWTe"dB3|Q8zAkO#pJn# プ N}O_Mfp=O%\ghI%nهp`~V;M}6U˥Σ0/̎YY lRI a;taIڮ'p9-`Ҁ56; zQ"{3!z c3 a^Vq4yu940kRXэ4La3zj ^ڊ6F,Հ[a=ϔf8RfhzD>x_pQk))4Fp(Zŏ^ VjIJ2:KJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%UY t@`>%U@/S 䍥@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ &uBJ 0ѓQ_Bku3F"=ne^ ˇ03<۳8XQ!|~\,~KoKz ʴa2 L 'AYtO,bq:*B*3WؚBZi2h\L7zCbqs]/OÄy՛2E$|\~Ŕ / V jsKlT*R CRBS>mnFjbl(Q[18 XJT *u&#s?) 
xeNF3L]hLXM6*1߀ !wwX_4/Q,5$&0I,Lba X$&0I,Lba X$&0I,Lba X$&0I,Lba X$&0I,Lba X$&0I,Lb#~Zz+~b)u^_փ{RdwK|ZiNI9c'TdiŎ!nH.G):nMoeryWnpel$7Ya`B޿1ׯztv*0 h_zL^4"98Dj!VWROq62d`a`´Sb`3WhW` T•6{}BpJ \F \;\)Wn ,$?Bs:Bk:v+zˁ+c+v5]a -_aMO[3۳a|?^܌dm#p~ج7gYS[*AJr#YU7.y}]BeVwvjOeŁ/:*]7~釿w߬~o:Tgx<Εǵ)j.ȫ9蜢I,bNS(, 5" ?ڶm4ۏ鯣#̐wAڸm}g0i\^4׋>|Wg ?%?YHSIm^PXYeQK!|)h àF04ƒlΕE >v?{oM[-Tu\z#N7!zjЇm8LxK8;_Q=pWr8}xvlmOo~=g\.64$z>%ڰGs;JԴa"ɿeάӬmrN9/]yg`K;sЗS& f{-z?wϝ^z;Y0G3ߏo>v?oi'ѷ촇jp]\n<`Ť$'w#5!RZ|OZ h, ;?py^C||*ܸs.9GZl\30YU ΎnOmN RA%ЂP{c{x\sVa+y@eI.%in ;s>)wJ[3jOf `vw?g} ͨ3ЛmɪWO{Qvgyٽ)]P.ҳmtUe|j[^J^Xӟ%ߚ?-k1@5Xzm @o˰7pq٣^[kۚ~6? GͿF70{llT6KXN,?2?)'? %+JPklP[ӆitFvJR=r͟R-x0ڥ~#񨯃GmH˫NVu^IJe=ʲ>t#]_HdKL뙘jmkHjyqF XմU+i1ˊkbŅ&bC,#g m˶/󠃶~uA')/Nʚar)ÜŵFkGH]ߔu?d4y|>  {p$6pbL _yqquc L_z s?\ԣhiɑ3#cOƎ~0Kڥ;jMa:n^VB1jJHڶEFU.0kZU)7ZzשּׁVYz㾂ETlt皧ad𬏳cP3xs,~?:a'LT2(+w};5!ϨE4i؟w=&u `.mb0.l(Jv^L EX{`c=iTdJ{Xc-hƝ3̨eT;]R1d)Rr9M/`ѽ;3vm賐}4ˁ}{tTL_9P}[xg],`ĥ)D5K6(QT%+cn;'"~W<>Pո]ZZMu ͧVl_Emre%ֿNcǷ&6^v3 k!wVE{Nl&f=#CW0]H0)h'x LVE2!( N & 5^&yv{""cg;ED8c**X.U"Sl`J1"vfmf>i^/.ʎqQ..~GƒŴ3+6RMT |r咰B5gX%TوU=Sagc_nFAGя(< t`bdd{Vwqc\;Rdc^f0,U>[xJMzw5v]S+55. nr*Q"m?Eܶ9?m.'n{/[ nEUk%8㑻WA èfEf Kf,w VJ\BK%LYLb.| c8diR:ܼ_qH]j5wU0#D4M6>O}J]k/V}ˎH}:m3_ڲcc2N NZJ6V2#z]Kb9Q L)'e&&=C)\ᔌ' .JEhoa‹q ,wܙ9;:K4=Ts\90UZyۚRX a>bqyz`gu4˝;[tIaQp0tQ(HyLR*00[XGL?,{"!25EXeM:aTJ',` #5^QGcۡ^aP>{9Oqc|Wb' .U/ s]Lon8p0)b> #3>[(κo7v _h^>F>,Nkwb1y#xӻ&\io( wOuCz'!MnB*\‡~Y.a.y^bybd"4yyߴ0aI_0DŽ?7; M;pp 8Ѝv~eHŸ_ I, Twu[H/ʆ{CIIFm `KLs$%f" /_]LC_5Va َ}KR62xI9g(;FZ(9]Oc@R#I KEݢW6E߿UY{juK%U}ux[20̉W2]$mD^:m(@y副wA}y&a) FTRC(AgJPZ;9H/! 
ȲGPt1+&z@yD2  e?-6xQѧlKiFiʖ> du2YF'2 Q(rQgb*)Y&%TP)FaYcP $'W[lD .2'7?~Yf۔\0&h0\}N+o ~XQl`իއ$2C% JLSa"P)YųB-~~Ŵ;i|QRCvtzǩ_~5g 2/5մ/F*ۛk%N9%oxIɼ㷃óksyx:}Lb@f7 '~qRN?Ɠ:M4&5bIȚe0zN)exSy\A.g!.Uq2\߽/cFf:"Hg3Vsr+Xo&3GI'V)8l>ձ% SnQF)U5Ny{~ݥή$e9vs3{tiܻ)aYWV{Cl|gY]q0eX ~Z0Jw5F5TF%8y?^Ub,~2>/{}z&sVWoݲXv&a>OUЕ+.Ϟnd:Kxmev3 ~f%Y^[,?EUz7)8p1:Y@mﻺd_aԢHMч rpDžޭ~9FXoI'> UG>?Xݿ]Ro0gdRgX/|2\X}ָ iWoQA)?'M-[p-'Dϖԃ8V #<-3D:7wң&YΫSk^oxOtnûMGR)Դ[E{)osbfK1q%;[n77/ٲnT#\:F {QgTB)^13ZJdQKGr"]*&W6d|"Q*U,tmyo;;U%B30ݠ7tBS* ~uew9^LލL.O{ zϜ`y&l0< (xTxu%ɷO8"poìtla6&hr>k9ǘK%LAKl-eRt4R+Mh BLA Id`%@ [:ﴭH*RURhgǢ|ȱ)}H@Pd+Qކ* `8j SIBfl%Dzk0Vh$d_P-kzAŬ,'͎O/DMzSWP׾%KnƵM4=*紊lR[z|zyRMŸY '&+XE3 WHܑ׫\ɳk\K5Keɉ]>&R _LV6Fdih $ ݶ=Л.^l; vzP-xUëTjnGw mج$^9f\*H)M)yb$ <}O>RfCYrB`NYM(1i e '%r6궲QlK?&ص~rmXGv3\584@8V+F)BH;p hg"A'>V9o[>ih+t8B\` RsU%͕BSWRyr=;e(t G&sN2b v7@4<}|Nʫw!`Q*BH%JXJYr0AhɠF L!#od[Fu~4@&H+V~F."V/܋}NO'좈u(r<`1mIȧ]~30{܇7cj %('_5bI,Yb)&,UOWUWBd`B$2L?F&ө 2{ D:pI3 aO @OFu }ggo'4o-_;[꫃^SA qEYQdiE `_)oZ `_%-0oR4=d{77=[n'aH/g<~䱃DI(f2鸒Q>uD=D9DBu*e>ŒΑr͍ST[3EL QzgRN;YqNɥdo=18- &hu#L1pVzU:yu&w&F|ȆoxVd+u6ɣc'BL6=>"I/枖۹AfM?/KaᑽФSfۦn^fTn'ih29 T\p3q!}P5/k42ԥvYhbٴOiNA")mO^~ܤjXfd SMCg,Ӳ5??Ic*'ܺ٤)\\BBVGI=GG Eq. ppw=sgy>L.;Mp{qnfffTSm6N>ߏQsS7lrÌڨY` ͵{ Ou˰U%ח+ƿEњmHQ߻_%2%i]gx?ۭO g:Kȏ'wy̗d8x{<~-Ta}5#_ں,UsqSmm_7WCNfa#cef8@|3Ahm`)PQ3sٺ 3oȼ+ؾMqE(*f`hOm0j9\U=oJFt #<鲸N [>~IyNc%UDgTImE>1Vu*:'hR"f=8+ >E"mp&v#əkC!r'dyɡC7 ab07szr@G\CiJE h[LOg Y .Yg\bQe2 5/ 6 8wZ1َNB.5{J95=8؀e&:O=̈]7J=%[۱{enAJCImnjƔU&M>bQrйn['Io 3 Bs:Xo D 2,3q^k ϣ)K!Ũ%#ϊR+Ɠ6gQ|J3k! 
%͑>rc8tjM'0MieRN((% dQH# MpmrIAg3߻% VS^eT:S#:{$^J7JgLͦ".tX'Htn:YW0 {}UFяXNHHׇQ5Vʑ(TGK/W+0t+mU2ž~ceXQoT_\qñ);L[Bj}Ywu;~ZZ_a/cW4OS#.r;^3]/\0 og7 E\07i" 2-JXy>a+ \ኤTpZ\NJܕઈƻ*URY53+X<C \IK:u")CL ?F$R b6ڮLO1ղUmz-w7_ٻua44R~[d jz S19]k9^[][,6x,^JLUv0vj=fm Ωlg}*ʓ)M3@8[)2=Պ3pöC41u L J)i)j[^<0Xi0d/8Ea( ޔTzpFt)+ U3eΆ-}2痆}>5]X[ɍ6>=ђ*1Yv=DW"`SCMȴ*, ېd6>K(br<@(\ME^kfBdžڙ8 5ݔ0P&#y3=9pW;7 4gST(Tp%9d b{F398W&ȩ"p uc:eL8AO &uR49)*PHt@kdL؝vaa˅- kt^Ӈ_>/ D&L?;vpLq0I\+M⃥)U䞨0@Bx0"#=ș@B+ۨ3va6U9̂6#vgFl?ŕTP38 ^ףvnx!h Ywrxg-<#IyK,)f H kRD҄5\*Y"Ǒ0|2@Auu۠t b*x*ؙ~1"B="x2>RHM 3^%8n(V"YK{3%Fh 04Rl$ YiQ#g|`RILIsiPq;F٣GmW u):;ӒCqQt_xoȒ1eQ1 b% @#FZ`{\<. v!vi?-žL];G27sR\_FBj-:~ܲp^Z#E-bÇɴBvYMsѣMQ<bD!$)au!A(I( <)[d%JmȥX.(ߪpΠK/W4OV~\;?+S1\\ *ods de2f٩P ,BZ4B .ɐWx$ޱw&n#^z]ÇIνouy>d[eGԋM'(ۖu=f*Ů1%{NXb_;b;WqS\[ky&u=y*v6ֳoI}PUI4ř%9^b]Yk ҴN,0챧|=}_- M:9HEI%^zH T:3!htǀeP'(.e1mI7OJ\k)r@mx l,`&t H !)!hT“8#W4J =qƺi@)Yc'$w L E8+4̋bLW7QDMnC2d! 2GƬ:EӋ,MZ6ka%΄ [C:(&6%I^Vr.#KPjRyY¢s6A r)r{,3 erp(,pg]uk z0jJ-P5ifp3*IǴIQGVpk|gMKԟPP>ifv_\uΘ^ʍb!}n[)M/ԗ3HJg\|E~mޫ,u˲Kr8#Z.rD0YZk~]=Jbd_nҴj;Z&лSJrRޅ+z2yVZgKJ!Kˏpj{Pihq*ܰ귫+nodG)LJ+,4|ZT 7d2"^ph<n\ܳ.\Ӥ;?|LiHC[Ϗ-5%9X.0KɃԁigKqgz~J Ntz\]/_cɌfE/F߻vhh㏋VUE-{5Wy_Grk(/[&4!~\SQxffaz[VxH=vIeKWImMC5,b8p/+.2<YwU_[Bk>jinضöGk6Z/:n* 꼡Ν7XHQqeR1e"X 5hymh Ƞw޾vpnz6gwm#I_!iwoa ػۛ nnm uc[>QlfUS,zEjLeEUտPO '%.}R$ d#+Bq^ ʱ^S\΃{b :l0x'0Ivx7zfRHHqBQR#)ڻD y1&4)׊P)I΀(^yʝFZ"Kd2@Mr!<$9,9.K冸v+^"cČxFk_?WͮgsYs\Z$k,60)KdgvX"u2nZIEnU,'ol6Yy{r^}I> S $"Wk40gw$=X%BX=!p@𤘶ZAvg־l;*o@1CZj-@YI {,: *MdقTU:&ݘgHLIP<՞:%' b(F<~5~gKb._}H>\ֆ/j/};Exnn<} &]]m 2Q"ZnaJH~% עEx ؖE:v5"n1Mcn4<}϶{7˱bx 9P}h'xoX W>-U]u]I"~uMOHЗ.MJwM&6OoojwOeϚKvAzۧl@m3̐*@w |O, ~x߯_ixr=M;eCK5SƧE;o]G'<w՝wT_q l4TvSߡ7g`21 ·3Й;(LqwWa7͞k>s'v6f;xf$I[&\ a]1/͂ML^"Wӎ&zw;e'+|&bf_9 bи-bǑ{CH&a-tsdB:lD bEď2f`@*V`KP<îBYd{LFÜP2M,5 핁*N¨(́!}UCoYg5Di/S^W 9N~^3rSҘBJTfvW8/qVyY t"XU3cx-w$^Oi+|l)ʥFf*A=YBDCWpoA3X뉈J:JV24K@:.%e͹uHƢhW#q")!d`$PO1ha35044+F~Hud­; w.3 %B=}7/#) }Ї&SƊCk.q|h8x=AsI5:F$[{|dh40ha80h CPjdYBEϵ˥ Cr6rq>*]X OZ;pa7У]+)q~gyw/gϸW2J`@rHB E&1yr ANhJpT&o<*h)bu 
g!>RL`&EGΝra.}LJW/F~ ވ9M+_c.-6EkT+FreIgYRi]Z]P<3 4/tfb6 ]$*^U&Z y'DU4EgTEijq RWz"gibGk,Xl*$-1\NhuCI|šojqWw'7 ՟nY( A~nf$T ^NY }Bn>Mf=Y!? ^3";ů=.]ZhMjoe71%A=6rt :.-sEift-`%5>1%h1N^ w-Hzv}`g1>K7_IkD&]$Bd֓1hn:+C(@hιW{:0#ZwG_$>lduSq>E/m~3}N9,27l;a.:xK\pO"M۹q+Knq/Uց^S7/oȝ-d[0~1(n7 );vY{yt]0wt­$kpܮ^v>WFy~\ώyTȇ5Y}P닯p8}ت:sShd6 hjyqb;aƍ`F+z.O[nVFKg"!&,8gM:Yw@5ѣ7}AߎhlmnYؗ/ W0/ǹ$W4@mGM)rBMmOY7^CJۜK6M)VP]$Le6q#ʧ= oʇl%uoͧ+W @c{k5DɤDQDrbݍI*!ZJ̤SpB7{gA*\?,7!y1ys GO ITl2<;ao%߃|ž,ۣVc&ө-A~EP0$F8T2;"_j DE >)E2P ֈ h1> Sڲ,h cp,)ΆD6o&ΎÅ7=_+X˙sh4!6?uzֻEyYe@[*ǂ{XLaMHGc$ 45]-mCpQ$!!dBсDW,9$ G%,Z4ҏpW9c9$c)229(kR\J󩀉2Rg1 qh|_6a%̱HZ-@4R\N J5u &J?!a3ٓp=mkx1E~2<ܺ w-fn*F4,=œF ʛv_~ rY :*t>⟲4d܇O=zlN?h~V"rb Ѻb-pOZN+"S3!hQ":947if8a!Q<}1(v/֍O{|>|яXyT$64rhBho$J*A9G i)km) x%.˷-M 7 n'鼂bdlC|D5zFu|sz /aUͨ[]rN 8>q|;g.t)?8'E]$ECo1pN$0g!,E?=(a*ΐ׌(]sUӫnZҼH >[F^\?\gZڗrz4mw_n20>޲|Zet=^g:SI"3 [4zH\,~TOR?b|'E:'D@0x@qk(p2਍t ڿ׀u+ʺYu tNN tOz% Y5lrԱSN@*cg2*4tI 5RR dz2ZOƠN6:bF[H*'\-Z(0yT SlVd]Їi 7PVͺJut+)XHk1@'҃gTt lz\N<zщ*%ء7XtB)%gڹ+@HlUsqWUZ}5rV [tW 9jIwfaLq+0+cz>֍S~wFyiʪ[YtD, {ؚN?#ˠD=,ůK#zQ=_}y5}8>;ߖ kDC%姿|.KI.転 UKe 'Wmy^'Š-wܓBr0D o}{|܋Tqʈ"Py@Qg&6ʧnKо6oeG%8KlH<-]) 'YpY" Z@)'6d2'2zz1d^U6jedօᛉsV\Gr\͸5.{ѭErx'(S"D!'I[d%^AQ裔TRy@2Xn3(v Z,M:UFX@.:Km!CANEEvɦ|~AVʳB"gWBT^l-KhRRE;3&fCc;k&Ύv՗`fNݤ|P(-1[ITr!!X D-$eJʠRK#u 2 `-MKO?[Wi[EȆ8iUtJ0C&O xKae&c j' q350K|" c4 B7e- @$B8th8fy~~v(ltdoGLɀF\61!"Ɣ)!Mv2MT4A+G -_JnEe;@Q9nT@3+B$v1z$G*\,B/^>p0_0gW fOW0 ZA< f7~w4:gWp$#gj4M 8/;)HRGȺ JIZ8}qܭ+tuqshPƫZj"t Q֒ɠ.xqN== W' U2,?s">z@AFT98ґ0vqpLOcHts?)RTVqro3c}<}8~H2,ƻ/MR<&CW$ekbIbp4 #HDJ@G"(FYG!#"!d65f'$4ڕLBDVx8t׳ yS3dUPq@DL:LR'  )2x&N޿VeXqybW@ؼcGeKrT.,6W%% >H8vV>c -p`6ZGvva;/pՀ6^՜KoZZd )&lhMAS#FI(!+[ M(uTvpww{h5NWx߾QyB]EfͶ }?BX%F_ٸDߎza5=O_ 7fBa~z‰.{psxtON?ŜdZ[Mƣfg|u%־!_K8?F`Qζvʏ/(PULƋr'#JY]aU=HEy;hv޻rOe.Xe<0c/_7׻q}Z6EG|AJDLEfmj{>KXgQڵ+qW5Whni s- 㳟)N =ƣUOxj~yOhg_ u)߇xz~%?y'gguxNzDNB]sY`-u'|[!k>v=)eb6Z>j*v5g~}nnv.ܚ4 1KߥВceza-ݒhӝXU1|{",C$Zko;7n6+8\zq(pZ]/o݌wy:ejf-j|>U|`;N哵 bbXwEfs{sijh4l= N @"i=qNKy%e3cBc" mg ^^&P[߀9lμXGYES d=x倂KDN4$ 
\ZF~r) ʔ פ>//5CCr)7#Vsr޷Jv8<(QoKjl_Om듰B<|*:X!3 xzxY$)5:'^` _+J-4ں?wu%dm{R&lqNǸt\*=).^-f[?︛هH8`tL e?X8F$׊dfDwɨ"iǑQE" zJ*-taRCHO|m&En0`C8"eLGYB(c:zd8;ES_gh>jH<.\tUJThLm5RquG[N\9mŴ$R#|LN\ ZSseuPɦc &l)H'4hnw&SMD.<]D>ɥŏ}>otuTI|r fɜ!Kq]7{k_7ޮ|%RDVЧ)IuskpΚp9a%;aS/yR^4,FY|UO`$pωersuu7ʹQvA?GOʷVE.z,Se0\F c.ϯ6М/ݝ]\9{Q1)lIn蔚8k@֧(XЇBNEvLȔ"BH+H0@Df̸(Tk{Ҽ9Eo{oB<f؊n{ a"onjOWo(0Gml0ϩD`*0Tdne;=/Qn}*QPZbgIkD`)Ǚ?QXRj#1T'洍>ոҴs6U$J.[)/ AŌxtɛ,tZEX$6S&U0nSn%@uf-ɼ=$չSWi/+;x{7T f$)2bp3 JSS,=L*ԃQ*B+alZbڍD\rYGD\gSe{seqՠ!_FTMWG] hb1(:uTng})tԚuuCCr):?ܱnAՉu;*BQ-Zв֭ UtKxI0b'<$s"7A{fxt`{ab򛑙]|2A?YT}\ 7eJY#|Q5 _n9˔PeZ9=+|ⅧDRwnn9üg#z/Xɘ jcц8M\vIxҜK#Y+vg/ Q!k`SATK,c9 \⽁gqfa" A~w?gd1C*+@j1 ]5H`N<0eQ)QBaX(dLQj3If`8Ja$&)b5ixZ{"NK5iI{)Ӟ~np0чdh>$,Lp*4b8-0l/+.vy!o[$/5 sS0"d}__jo<.`+++NNU|g`&ʾpχk[` k{wʯ! Wl?/s n(eT&T{@ #?T^R4QyA *+ 2& LT0с)vxMk+1iwfZ8:6\PF7_lTC7ms{/K6_^#E(tys jS?kвc݋$3qY?k HQ'ڇl8Cg Rpƾh6~: oanLS͠7ttߐl6 z+T%{Y|<\>G0)ZG#B5kLC)92LS͟Ec`:w3WmPŅ:iI1X쮳\ZI0Xl \ |ޓyr}64Nv,΢HP:ε!q(M)þMjBbL-6U`k8ib8lpX/|cza{PMX]\CD+\we!kQ pF%(#u^98A6We6m3V$+0-xHĆϼ#21[0uhn&qa UfJxr*1fDZkBX7M%'%UbiII]# /%h7'OH|w눛Fѱ\X2e,Ē)n,J`$dqҬR`S(i ky?#5NJ<ͪAqRZ7fAiɔPn-D(lRgкQ؝?|2`hJ׏ivnW?ewBvfqз*+XuÏ|RXː T|A 0FN]fXҲFX~;CNs ciϢ.shscaPԙNN@TSɤiv-Fpf(I&T2 pMG$my0LiK9KK=|(viV=ta\t @x9gaZSAK%x`ue'Aƍ[S]-źJ⚭ݰ#;oaҙj]Xe^ԅ{ݚsЬj{.Xe+/<6eIZQU|i*ppIZ -XA3wS)|2R4 AG//.8g`rhnʨGSw ={߻̤8A.ehٱ;x R0ÐK*?v{<~bq8~UCǯU?wj%qW ޱUӭu{/Ϩ抡3; PjPT撨Y޾\|5*Gh}ݘB3׫)\x\0,\K2/څkXj]f1yz[>M2;՜n)d׭c+ΫRaeqƝД*V+ب + yVVkI }v #K{DC Gg;d(J )9]=]'%'m,YH)lםFJBlӢµRK¤Iy$ePR8wnUGO !`j)s`2` fL櫾aD(<?y9G:D tK"+=IɅP!'͙' DIs%OKfFi,A\(q*T_ȃ R]G# 6^ۿ3KT8WQE;Vj_6 ;t9B1x82Kf4XrI0&x&\eHLЦ{/cEʂw!i&b $Qz՛&-LxQ[+q2CHb%%1!y9Rc']ŊbIp-5$:/}ja!c%͍a;A bb J74 H +~zs/!2mrsP찴  ] zS;QuV3Gz;]j&ޝSDE7~jp,e8"2@D")jhp ):.ά5K_*^Z/7> !9Jw~~?źHlH/:Ѵ^rZ\%,"XLenf]_vy# UH(88uq1P=p_\ՉPZI"8[[K4qQ`<_㧞ɦ!p35G4#^Ǜt?5G+4"$݄)0 p7!uᅥ/}0s?~n|~hGjOq_.x~´.?z}\P|;y{::}ϯ>f/ ޽O&棿]?H<8WP pKyŭΨpHቔfL* Rл1S =k7EC,H{sv?`ll$K?bK3l ɨ*zH&(3ZvȝB x;囊 +Z&Sym:w luK0G3 ,+T_M#!eoٽp ~}_񳡟V~jýu Ӑ]utRM[' <黟ǀ: 'Jg<B6<+Cױ9\)O80+N7P> 
_V"'$owM*^/y Q޵"IGhs(y w:4i&QWo)Qo<5yB/CK0}Հ`;*- B:(xYrToH,EE+  H_sM`F9fyހt3 g-=aڥ) `BP/,R6ŷGŊ6a0e YID;5k#gOS̏@r΁\M~^9{:%wpzȭ~Y|HLk,}Ko5G;m5E !o3YJ/u1u` " &J$ G}0{>?*fXɼl{;O`MuÚvw,2KZ#O6̾§SӂOޚINxvk<E_QO j"-R"bֈYL;Ij_$ѐ mq7'+@yXo.eCh`,z$z[_FW}|_fw-@WTO*:l֯ EH/B=<#q8DO7(;^-S͇Y*Q~״}'_֗L+ nCl } s+w~ًcv2sT?OCCI\g&(&@Ghd$ -i;Z%ɑIЙl84Lm[EQc*y*H}˂XeA}Y黁 NH 1&'D놫NƎTB9P(ZHւ:;p63 35hgDJFdbpA -ס-R.1jeV"%E)@̈́@?mCnGGJbJna"U|u%ogS۫BA9ϕUg;y:?idy?.a.cmfw?.UDW҂5Ӄ.D>3o=™D8maRqe#F9XX/lhCq=8R\-@G-):r#)f3閍htkCq)I'ّn$E]P urQG]τ5U)߆[6ҭ Y4^ؑnD/(ri:#.5Q㳲c5h'~]O:$|:Hq۲Y+jjHV9$1{ao+/>jo·$ 0å4$?=L^<&#ˊNKGDH-$m I\3V : )[i)3?K+H %c)fN3/Ly01!FdG!B]_pq)zDAkV5b NNT])S-tV"IRIrb $S]}hF"w\<|d">r@Q̗#o,yo5Yp5U?'˶p2ދW/VlUm5ϫlVP{o49lq{&]:Vf-UYc7t72^=yܰm|/F; (e4ޓcįਵG|Pxi@Rsb6Ӵ=:|7l!3Ջ_ qV(y6l-l5+TmKy,#mC!EUlxve6xms2a |i#/SND *h'ItxC햘<@Fr,ë_Bbܱ!+/@?&i"Ft(=R?w"įA CÏ ͘M&R>bHڧIɡ ]pqt(ܟƕ75tk:kT&̮dw_2@`t<ᢏχV ڊ۹ib:2c/5КIݑ@5)Q``EVAAZB-rLB(&5OdGel[8 -i-w)h"Wm|k+H0naOgBՋrW&~`y·cT;xܜ,R.ZExﮍ!VĦnSxcdkmSXGS7ֆwj>>lc1JS;S.5tq<]K̃tT\ݖD&_`~_X3՚X6#VO5! mgƜ jP] ss? WN my+q wwfqgfw+Iܴ?BI!f;*V1ntAp9Cm! 
/L>H}Qҡ·}?ppU;@pG޿-k@ęQؓ˷rҡĊӼ7bn>uv Xu#cHEw~Jy4*j3 wtSO b2xq@M/j)q%#EIAR V x2ӡ)n: #yW4Ōv$a3.&FCikLdR ð,9:r֎wr8pI~`>_6w e'IkOB b0j`2/4sǴC@ Bp+Y +f5?~3Uu RHsE`$13.`#:=IGrtfW#yu`Hc{FnCqjvuZ+3Xq,LNt@F〉" Uk$}8|>zKgo evgQR#*t']sI u+2.yWgv_#nfiuDap>_ ݻ,WIǝ|m^ T tStjDaJ@yK p -I-s-UÎ&Tti\㥾ρ_c`+bFcAR_G9D(tSuH.dJE;Y;dC9e#)cL =d:#~7f'Utml\X}yog# ^gOU9IX)H0:4г/Wv%/8lXvUk䒥i7;G#Qt$$AX]8Xm}ܗ;x"*nbsHЯ}>fK\KMZ90ޮ\˗mA_[~"m Nfa1K"JvEΐDYP"mH<9s<k{vj$ !\[̈́dE噩6uG%ϔ(hQ lVz0lUoLu oZ!^s,/⦱V_['B8$7 :rP/]S* kRb:=3|EZ0s8)Nh*WVSh}wVBpa[rr-2OCFFIAqA:C-p؎ j`재0@Z^!"Zk?@M$g^;I- / }j@Nfу'g(:FF磩~r 5o|J10T4MrMԻ}AK%D`ajuO>;\Mƫ9 S,d8n"y?W6 %oZ|_r.(V+j~/ K0MUGS~ %Ȗ<(݁CjM#{K}Ѱbx= W#,F[ktKLv *!'l,7V  nvu g蝔r9x6/@SHȡ f :X*߱Sv©D@Z`OEjDg 0 "h#g_&hrX`d:oC-xjAj5~{R dZP|$Rc:ê^2m̬؊іeXq|DgVVuր Ȑ|΢]Qp8{A !2[{OxlJ(绤>L'_@iTgnցQp $]d:bN,7^L=4IǍs>k95[kS2^is@RG6A!ÄjG!W%O+$d".O_?h{+=UP@[.yÉsf,J.^hU@H0.26{,79-8=̪`QJcIuD Ψ\о1T$m=>p\9 jeؾ%6m0$޵ݭjF D >az&{@pPŔoo)mXz79ɊbKyFXq$,VPaǞC %ү0S-]kOg ?FA |j)AW0:I_C}ż =^?Mߣh6tLJWA-羺 t__rbP |aV믗4BSi]ߖaa?{:7̥%pHVjE4e0KлŗCHwC!3Ǔwd]PHV\=tPΊh~08]L Vnđ!uit+g˛,U DA铚Dz V/|} i2,\mzPH$r+xEkP%;r=\ܜ=Iˎ5&&^n&jghktxR#:`G<'0%X3"A`JP^c6qg.ң4pD"q}: *HQeSoϷឳKKTyp35W͍LǴ"mT1^u ot} L"5^|zܰ{dgfz줼omwZ1:4y+'`27eNv,䛯p|װKn}2tt+1]ORvwlMjN~1q6wSX=z`*e'+%ǠXRѲE}fۨ^UU>)dܞc)>-C UoVU4C5gje6>*8ze%>N.YϲEAW7 H}㔱%(Ck) VڛR䯤/gS;JgAg>92C, J%Ɍɦ;AJp9b}U,!$A|Fck̎GZOjɍ&Q9ٍf?,殑KZʞtl8ڼϯ@FXBW OKUrh4\~ d8D ?/ h/Px<^ ]$$3vſFQ|W7QFOT?o܂-+0׃?gsN(`8xt݇\;#h` ⌀#"ߠ_?_u~ۛmgE[oD _VA ~~)IS\Y:o3̨t P@.B2B7#$, l+߽cSIg{|Ϋ)!{:-^vѮn*6XzrL`?: 1{r h=>.a6'$H0|eG,FoӍsc7 {٫h s +1ܵ2ƞ7U_Z^|^z~ 2;1,P[Nrx!HYw#&&?Nc(l;Y~]zczJ-xy;*Hm`P9^`J1%J"J .(uȪ9LPUVGclm.y ^9^M[@8W5"*҄ltSS5^㑖!6:MC5C׼)}3 %BՠD s8/%% +W󪶩|^@xJCBlWB{%Nj^pu? :,%nq۹뫁̨ @)///S:—!ȥ ; @GW0a[?bGNsm{~5PX$WhB =İ_1RXpI)/psY"2ҡ,Rc/ G ﲅXvŪ=~+ sCG!Bv$Q#\? 
BD іr{ BtQ qkE@NBI$A |MZ4 \@DZf:*|h&hZR 4-oU`-paKPht$hNo_ 7K d;rh-9!*B_Q] Jpڥ'8.q<\1t“ZZak$&4'5 ꮵHsdv=kn>zB{ jFLB} 5nC_+ =+pDؓ BII] ]O g!Z7MjQfwkZLEFSk3~N[c9Ni?M{j;8M $+udA.hl5$]hD>~؎N5 ߾\VQʪv+aȱ1U]RcSuT%+2 0-CYw 9]> bX  Wʼ{_v`b*/A h DeO.8G%cbL]iBř"P Sq P4#C!|twҷvtȧ85C/D =sѼ Z%X CV Dk0*qd{ h$jU8q(g՛ARPV|#DB ޤ"FߍkA4%oNك`4[jZZ\ xg>74γo7?BP䞙YsG^ࣤKo9AmVjR㳒PPE]!<{*Cn՟x[Ii ] p_k5_"JOBH0(xa!` e1,|!:^^v5~H)ܹu[GvϗMUj?zK+B}Q{CLvIR_bv:!(` 1ւbO.Ќ=¥I(VLj,[񅑖|9(47Q͟pc͌c\-Tt1hg*qwR͆A FK;s5kC2m6;;tmI Agdf5@q4P>$]_fVVU2iT=9Jx˻ͷֳ x>}L 4 'nz‘*0uNwV3sDKG ;H$>uuqv=S*pf#IDpSL4@ I)DwX09`p.PC0RBץJ;I鎽]At] }!F.6)T/֑]W 'k׈>sFX ^] jχ7.J.=}F`xy@.LbP0"J@4I'JFKHq  k%( 4đ9%(92XfǘT*E_Eu)O 5Z1ቚ|+)ۢ)VL#!ɼrgʟwYhJ>, "1ܲj yY:Աs3ujHχq/f c"Aw~ >r;d|2_}7Cow;pa4?gb/OO0Lrl=xh63g0`X MZC5=0r%dCuNJ;V\b%O̒f7sl2e1I7L?I:e Z&Wg:ލ5wK)1a\߱!<7@8+awUnZ(*举ʪآ- c V"$qgB!ﵒo!8Vx(mTy|kmٵ23 ZCekHpZgڻl4\'s-|o|pLNWI|ܱf6O()cVx*"Ì6i iUR lwNe}„Kٿߝ=Sn6̧E81hf~Uҡb `J ue'+^tϟև|i9R+N*O`5t 6hrEGA1Zn4VߵD ^5la/rģ|oSs%j\b\<+$D{oBg:HʹG@nH?Xl}]s.EDJ&=7I?O޿{|X0q |1i?> ""@$ذ Ck! 
2r3Oq63 Vl3]q痧'o&xhn9\6_<|G1IN)!åG#\嘩k˸ڛvKڛWkTgh"ֲ(c)TˏiV@⸾%٠,h1YŅJSFK^E&PK&B`\ᩖZã7qfba3+ .q sa)[o0[0 wF'ճLA!"J!*7c.?{V]V`v{^ث;BsΐE(^3]r6Lw;H$]@ .c2LZUvcm^̺x"m]7T /Qqb(Ɯ#1 -kKF 4ULP4E8#E1vyx=%ӥƭ૘|q **+%x{j;]κV(fsYK$H~%< 6/{1)hدq5ȉN%~A;D0 r 8&D_3N"E4/tT_f bX +.{ZYdW\lpȿMv S$I+(NEBtµVP4i'a&֚$:N.X`J/I rjEL0QKͥM0R8KRp(n,t܋7S#BۂNTvؓCI$:SD$&瑵M_ŐRɝyFO%dgr$|+_R)xX PEjvqDJ <` JZ/Gbcg#uw9x0i@L11HqG8eyefʀ=m)a1h6)#ʱ, RfKP( Bf2M~'&s:+dM@̥`dD#_ ch:vr+bU \kZP o+%G҄w)͡}]5D$RL7VR]de։Ƙ.C8\YKͥW0v*jp3@§{(5 R BbE%+mHbv伏an WȫԄ%QMRm ,RTje1llR"##" Mi{nkGP1tG{s4ZʶoHR}ccQ}൞m0ExsmNMĆ*q.ɲ&bChu7 Z_!QnqDe$]4\sV5KegӃYs|3}cTnFtr;ZzG9.tbzmߓWF>|v?,[ = ?g l\:|j>bq 9bqA/$̪͇É a5+eBd ~UkQV# +I,\kؖ83g bOSqo/ӾRBM#U~jHQ3C"01jtA)0dgëNOt%ztlHF/䚶qX]D5X| xǞe\J BֱXo1{HEڊDբx+7U(Dx{ 0o%fk" x`1tq{|euf/ "R.fLҵ)?yԀ_y`k}uMd4Bk{uÕXKD8⓸Ȣ~Z_LSDS(B2j <^l[f:'h@`x9u9no?"3\EfpUg t<)< ), A F cƤ n-F6+}IZOq5o1N^/(;Ћ2!9忖a3Z{?剤X]R/@Q= lsi`)/(6HnBK8VTɃVd@ !Z ne5kvˣB}1J-%tYЭl/+=z ,p "DtJd DFg/N1fM<%.A$֚o?i?XXKT)EodbJQH۬c ,3|0(ٖ~G/i@`>B"g4;ikG2֍ʄv}1 7ހ_}XqV6F=ZFk'WBZwm]!Dy[Z, UhCX3((dQ:g~<˷BB'X&j0 lPH^jl|}WN~$˖?G**O{tUWK"W1Z.b9.+kћ<߿(̏] cH=x$k3YۑgJ!EDXb1iQgݣ̟*6#jk9L-SU`Q_DRzJum=(=d 8EoH{MZI:#uߦId3ƀI _Fk>owޖx(ƛ+ʛ"Կ"^sL%dY7|KjDO{<a6$~NI)6\r!IgAsGx;X$5ﯯɼ)1|f>"CI=@M`FP'{!CS`2'd &?X?$3Ɛ9tz#NFw_mi ߚ[<"^<1-~cp3"Lˠr,{lfݥo$%tIr *y_a4vUX%_b)i\v=(RSeug _吀^>k hո9eѐtZKL4tsDd)XhKH\*ICdWxj1p;yVn{#e̚N!0LŐ_`Ɂa{EvOaJ#DZ9ּ?ɯer鞓+R \2YTVq89eɒx !7 /1&;GzO7 6#3qDF"Ld<5*K\0/Ĕ)aDD>c %jP٥[@*)$[ŕ+j1&9yАSat,-И킣bP3\R"3c8 'A92][4_ B= ye]oWQk/j̪W| }Syƌ()m?y1*|i|PuyyEoMo77f6jLdI8]s`AףۗR^ >T^,[~G2xQ$4 51O7+?pnlYN'eQ$؅Wn3}{2*KG>|v?,{g>yΦA,dW{ T_䎱y\Ћr? ;l FLkRk]$-*~~b̍FkR‹rշ:kP.-qg΢9^i^]) ਞ g,e,Xo*(bII7\YJ8XVF?ռx#BK쒪^ $ {|Bs$x+[|@^ʆ%uD'X;FčUT/yB=#75? 
w/j` 3SXM_7:%i4gnuQu[}܋~tώboMW0C||g.~Hj9ގ4XÖ-e6_(Vɖ#-SC~mS@ ɔ(a]yR8iBlŔ#i ]-z[eHaKcЧƉ`C'S,BZg]6/Ne<0G ӄN%_ o %Uh1:¾І3@0م}1C.Udߥ Rw5k)K\NWudcְ-]VmG9D/6¥XÕSH%51o] n6a3x٪Fih&$[`mS^w}QW~3:jԆ(nH9۱ƨ4sI#QY=D_5zPdrw fm km}LD){og廙/Yt3̢YXKϺ7ޕW=q]SUqFacv58/Wekm 9  OW_"l1evj/cPl& P/l 's wUsx73M) }KdqǬƎz,&c BR P6%ZaCJ^R*W(#_@~EyK/9Mefz7] sES-unDRHf/7*ч7Xv=wpr=>U*B@ QH0bSc`9d=8|:nuc&qŹ˳t 9&. )j% :U<7Etȋ׾jz˷e'G.DyOT{\-@O@SyV! Ď%G'"x/q}8 gJI ,hthAy2#RCD?6*ЊH:jb~ ^" ̐]ty@$Q# JiStSUTԬ1,)KgR(4 N@ '&"1DxH_Put ҫ -[bTDNb5)kdT/+"АҐoRsF# ͅ9_qicgf$*pEߗ_?g(`^df̀hwǙHlx{5/dF\(I'El1z՘%?j׻("R*Jf@ǎj=vA: *eFV7V%ox6m)FiMX@gP9A3C[ؘ)%9a,$ yuoE ^_1gMec#-Ejm[^BFpd\/3)bцwvqK}U9݆w'1[u߄"D]Ր)zY6Ã!º*}wZF쪠ݿj]3i -S :|q=(}P){}mx"!zɻ/ێ'7/B&'Q4w\Ū je]A )8K̒Lk.Q*靍āOHD\+-_~k51UPKoh@+M^AQ'06*ǝgd9$kSo`:o#(J*$&WUU–*`"x2i O & $~X/9CXMo54䂾HؖڳXEF.ڨvsRѹ҂ <Չ $6{gm$gQ(an 8C1E0 D1.%J4`@f!ZOh[Xhl[P-5H2x4\h೐&yzHS܂bj"d i&=ZfE"dF>G^[@_CP4 +&7*`GGaFi}Da[#I&qԸ_ʥn #LsPo,%]dt&b}3}8N#v~05&lܜ'?_53FPsxwv˚6c(ዿ(!ՠ\?8>DHˏa%rsFsJtwqh{\f#[⫏hcQB7 ?(朶 ӄPԄhh +p1Cu_ 8ԅNӄ!HWԴL+ rM^%a2&-d+ƒͭܙ(IL\3$kr*2 uX RDB7&x0D aeV;<YUc^֐ݘ3!c=UIX5#?[b3"'04RY[lF$0@fR"E;3]LB+fD \mʠ_]_mF.$<;:&~//Huo.,UT>D~}'<}L%{zvwGwWwG5ގ~=8pxIg'|S{6Wߘ>67hszƗi.ף"&;[-4v~=${5Eny2fZqƀ“*f7ݚ'ρ)ܝ!'9kryٜhEW"ki΍u*%^C"s *XX8PdpJrYCBƒE&^Ĵcx &@%>ND jC2J^G YBqo2q KhƸb=Fa5dJ \ I^6!f D-S`{Cz$o yHϜl%Wh#t'D5jCEHT‚5tsrCzUJüLkɻgYF^̓ך)#1 vy6³yhz>vK%}~\];FAY'(̛v8P~T r)R42ooG%0B?/[T̲@Tu_?rܢ0:~Γ?SG$!L+O/Fkm黬W=I;Qxa:_?GC95u.=3S݃)EGRB rg& lwJ*@MZ@q|omXr^R?NlCɴ>}g6:A @p31Lmʛ6<㹶87jמ3]~A7{4n{iܒJ7qmybz=*P< .rul#lڃGe]q7n<(TwOǗݣ]R\scry<4 4 u"q?Z\ٌ֫yzHԳe,5 ?Oy>;|t9]{'ZeTBPT )B'ݥ%f˦j5 mfBQS HY?t?!޿DYN#( TGLNO| )e.dd M2P5|ۥcjct李RYǬOE7IˌKg-'sM,=:^ XNˈD|گo=y)cC/XB  s4!&K(|痒aȈbHuTH &X% DME+3dfL&ɴ(!9ucAjTsvjGi{t"">g@^1)<'H!ѠUPIcޖHT (kEZ PW@Ԛ*򺴤D?#>@ j=<9)q=6w"S3Hٻ7n%WyٓEzK?d &5 vl|z5Okj}3;+z{1ʁ$&.aFr%2 (ȹÆʡ{5 cI+3gu7z/\uCbdKtS!C.pmr:fۻ,[UiO¾)Fz,Tv0 qz,6> ůwbim76-v ~47݇|A_};TfwdaB80FQ 854P֏8Q(|#vS&6LЁ-ǃ*J#KwNnRDctBH^9kdlTSrkN!=q)G`r#vN>DJ mIwUMVc73{wei[{Sk `8zCI tdvO @0j O $SS7 NF!o7'~(ʃpi8w??Rw8׈ 
ΞENxUƂzV(tw;Y,o?Vvm;t`w*˫{Mm7 iym{gQ1p#MYwX͛S=fӲ_ȶ/rsYi*C" >Rv;>G`Z՟]+Z`>yA ֎Χx"wg&mjawğ֐bO4T08/d X%LbLykgc&km+U竇{ׯvܻݮy ـ6PFe Fd=߹%fmuCYh}ߒ'/(-^-_ovG76?k#3 Έ`88G/ 3yQ" dWcs|ݣ @w+݊I݋6&%ɍV }{pڬڋ)i3('ф%=*Fݮ׿I&ZE㞕6x" *Gq, ޕFq. 8 H!q"`{YEj|3TKFcTɧwԓ ZqQ'o2GX>фg‡mAo=[)%@&<=`! I t-9'DZ􂭸`P[zpﷅ )Qx *dDC /29/sX 3$ _zAW孢gf9@)-J*d$6LXL[K.~cq?Ϟl7묹rMw]V[U$?vā0qFfBK)Km*կ_6l@+(Ҋk&"@҂Z;-p)JNhY @p.y~,sc[8V5"0ʉP+4j M(AN\ Ti4Xy") 8Qq_6Um\igKUq,V0im *D@6eKG)RvfE*&v]( /QeŇw 2̔BI&4 4|l#R,r+Bww5]{VMq Rq9eYC"7e{ҲG\39UȘS*.\/돦IV0%Ī0P+ !s*JXAXYDz?!j;1(0bY`E# K']3udaO,Hp RexUɓɁ=hK#V9 1èr*$pVXv.Kxu:ݠe$xxS2U˕rnw->l59}y ]?XiJ0w:RkA_6,N+}%8p \ 3# jJ~K! wSxİcB!]wŶ{aN ¬|1S„UG:y >{Q/Ǒ~~@>gn'va3F7r흜, N e/_nςy wHE(#K#%bbx TF f=AHݻ.b!;%[ &X/;I =v a׮T]U cAs+p9!( U02fls2 !D֢*3T@ɉ$JB hO9X2&{iM5AO-q{3 FaB\ؒf/f˷QG "GQ vIF'`' ` U7JW3rYC=%g+%xWScu6|~cqZgz>UYnC|6ohPAgֿ/,~zK @g9>1\ ^H71hempy7V_VⳉY $MdaR9m'ߵ2hE:حXԜyC  /RPW!!@ LØ'E<(L~t?Ovt?8ʱa}80d!C2ด*֏j9OPk(P $dJ>J|NF휿CKN\vTfN-;D'\Hlۃ)J-b18 w[j؛~lwA` r걚90 CG7Ia`᳡+ˤ2΋a_.a®anO P7&Ǥpœ8kJO1B"6X$h3xIስt")RU|åFoB@eQ=;PWwqȵ$Sdh,t.9qGT2x7<[K[2<4 BAτ?1~?f*39S@g-EȽDF 됑}-ϬQ?5 m\2Gvݴt_3{A. lZޯSht\"+X9 ˋCad. >@?KDY?;,EAJK ˳ G=jr>rb!rsً!/ N{P9wZ\"'wm^kOW@N$kgd3]"+ˎ6Ze<|zḾ̻Mg)RAae` C|ɐaPГu( SeUK(&:g@|'02k~[=b:f%82|8$bFϳH9Sz;=k(Ҹ.!Ch]$yE/y00$Y0#|q'% >$@S*ps5 Y 8}G <>8Ԛ4y8=Kaq1JbgKm$3čvNAOSyG:~ȔZ(t7cslTh|kEc2Ú#s(SW4G `$%㓙4 ],H S+.qc?7O;t@Tux "?ZN@1MN:ڬOvh>Ìj@85I[PŽҞI!Ow!d&_UF|ǻbIDMşż??Z;{\.ߧʽUtrRNgvyuuccw?w2dyo+BaiLF8Y.i\P5jTݠk[sȸ$4vixCrMՕhĎ~滉kYMgv*)ӛW`+p_19jK# [JFe(ŔD^0,\ yt" AP{)1aKp.C3@S,K"!Q]*Ix &R`$y4jY:N('CBKk()419&sRєC dhz\Ќ0%Ī0vm怺~.̩(5b@Ff9hl>|tʅnyeX>PáqxW ^Oo۫~-l ]WoZُEʿ GAIp/XW#IF4 "*fΐ D6GxF&_.=WE`֠ Nd$B)ci72eH-#ZA /$Q@dO :IK;bcPI!fH-cg U^ <@e]@`8x }QEɴA:aV kdi/&Lj Eb3 OHFxP0I (4VrE.CjԚ])1l>-dtYLY?I:?H0&c~MRvʇm4W Q ~w,%|->zNcvQrSz! 
ӀQV/OloS^RÍd|xU*,?M93~ȻcMb*Ox-سRZƜYMah}Far1hB{vL:U|k-{M[ ylګMNۡwŠ urﱎw]$֦[~wk!Oɭx7LkRлbЄ:Xǻ.Ϛt*ּ[~)wk!OYdsMՎݥwĠu2ﱖwY=&oݲUz))MVz/b\ P'xmA#-{M[ y{ $ AT=֪#Tn/vhi yTm_׾ x7؋i& RXf>SB@L[`h0V{[ݏi&{2](Tl/vhi y└{߃ݤ܋~\ P'xS͛=Цޭ 3@jQ j r,.$ڴ5?1q1w5V-)rx5f1?^Yҝ՘[U?`j̊k՘s՘F1w5V-AjZ%՘s 7eYKsWcn$Wcf1՘s҇WcP[w5܆%!\!-eWcjmZF0b1?3pJ{>WWcnRǰ0$sWcn`WcN՘۴J՘{FLWcnĦ7wXc?՘[~p5fǰ#"՘D+,,EjO, >Дv5ܪ%hu5fXwVvWc%ȆGzq`}WO _>L@?H)"0%,] ,d4H-=El)z)!`¢!ԡGPM]TDa 11,O P ʂ@+`: *Szy^FU&eLЫ 8+GW2]mk9B[8MpDo,wjf^F#+} aZU`|1&9M]` w.J%EF^*q kX046xyco?eG7#9&LR$HL9Lac9CsH[쐡GW6,vjgL9,F"=ޖNj{X$1*%y֟qFasiYd2R$~QЩH)`㆒q̄|*"C2%;:uQW&z/(b! Z)岐$ R8)s Y132E"@ȅ9/B@hM+T)RRM[..|bfpߨWח9Qh ߼FN>0.Koއ%\Sڛv?)Wd:[?RAjl KkG0E" Ÿlj1S>c1?onFW{Зh^/&fz_^<%`rcAд7ԮWiycIn5xWұdRjKcaH%{~/~\mfEF=GYí ! &GOf%DvuLI&A/vᎆ`\19+gKzu'mĮ/FiXK/sXhPc~԰>$#I ^ #o?Jgs"IOb\|UCԟE8 I-q v 7`?\\[}S@16.ԙ,Txx]c||6Nz?ԋl!9E0EW37.~ 8rKSv?䀬k%[x1Jnu55W&ŸX5!jn=`HK42MLR7a  A3m.]W,*_59IB?䦛 >K`j-dRcJcKft>pޟ&*?Wuxy6`&E5ĝkX,V.0X i seLo10O(uY@f FE奋z;t= dǩh}TbB@ fo>p-;8 ?3\WNk‡} nZ+ /a(?b04EL;ӟCvx7<$}?_‹?:'MS֮}#URrǟYZe:l KwS\}a6U 4L[^^Y`> ؏c!ǪʿC.cmW G39}94`WsH|;n[gDLه@(r[p##X,sasd4[k ??u->,Ɨ4zJM 綮l"$NYD|rfa˫WiMH!8iCm`>ào 5o NoG[]s}|/wmmmZs;BNB(6[% G^U!FŠb%2: fk>KZ3k؆0`l^*㣷IQ͈A!R)sZx,& Jb )GXLN5X+uqߜꏝ jݙS Y+8a  W[Ϥ5$ؠeӔe28h}E.Mez1dtY,sEJkC$tSĩ6ibdV=Ik_&Ss 㟆`h'{?Ox7Ikͮ;n4N=n;dd{)ewk!OYtňsŻwŠ urﱎw*B!CԻ'N{ 9~Ee-!}$A>~:=]' Z dTa.BgOH;mRBSjDEr??! IX܍.М/F7sxAgz+wz2[§gVkTQ?c>I2\޶.\/^~EWmp. 
#Ã|f>~vx^^]~pbsjbwU߽>iJ#{"q*Z=I?~Nh_iXiT|/` 䄣EJ-^,&Ӧ|M`IHSߢKkބHӋKv]Q,kaU{}NwK)\EX=_lJLt;Z|#-veMj4EK{gJWgWXލUmj!ղ:?M rkh{_{;i&cG["6=@Z?gJ"<2NaB: @&0NT!Esp,D3"MPք@o"WwZ4Bk[-1;iwFE12小)ҙ!H ZQzV Ed/eP g&IS+(/?;iQz=1&Qz Vyv@X;N&/Ԗi/8XT$JUį޿XfHH>Pml^A#DiHOSvsf6+&y?d:/>][(u(N RjYGL1ƌު m-`9TvZ²ӫcd(U@(?<4\iEWKT^|A}-ӱ5 NX`)3z~u- c ߇r)YX?МpR=lxSR3σٔzxWI͸-UlA\jo@g׶]?;OڶMe.tDz[E&*+?^Q_psVu:6gnvL!RU,AQRzq^͊C7dS()JqLC;}A5=}(AJ܀j|J_S%xOa-u:t~BXB$122hV vvv~Pa$-IZ bؼͧW,55g0^1&pj,7(lb'' 9Ƒ+KPgϳѺw*vtѦp$C^Y([Exe^ͅ&T淓[shRV?omlIM&qW9qZdsW^hzn.SWB4|zBҤEa sZ!hTRx'PԈ[] rĴRRRQYI1B ٍQ6~Ta +ሏrP*CǸ~m!;g39b[t.tv}9Xc!hKe!g]KRBa栝ːԴ:T*>(Ršw PĜ؀^ zuӹa,AP>vɃK ~$T":J؎$RcIo2IERa@jTpBAk Ma"xԈn5baA1ܛӊrˑE-9ʣ)MvPּ_9c;OJ:^x\)aF(y RFBf {b 5cU*v{ eqQ<Ӛ srDijh( ΫhaZqI/F+86*]lM$Y=a.K 䁘#arm #(Cy~_p&8`Ɲ!*uq4L D[Ii m ak"$l-0'cIlxxTP`428c8harF`GZ9(t5bf곱Yy~_2+r:ƒ#*zϬi7uoŝ{Ds3[Vʽs_Uo/ޝU=:? F1S$:[i7*ã% qQ_!+kV9?[=`,+XP7,w1@zV0i\-4eHH!ɸ<*x>[=լ6,QWa ABAUzv4Jw\pЯ-+3cĸN&jraL1; D8חsJOx'*ΙסR qP  Z/g FP=vXmp; BVb~ȂGrQk* e5Fax`Ž`*vW4ADE*M:Βּ Am*u&q" ӟPꔐ;;Ɏtn\[լ<ڂ:EO9LHuJţ4,L= =`!c9[[ki/WXTCE}pY5lOrssF~Vqw4-p򂿑o;aZ#w'p&4џtbBHr հZYQB]a6iBn?N$%eyN2.dǁAZq~X62ɼdzmwVB#vdB$ gӻ`# Pt4>v9MpaOASHU1YVG#4TqeMWnp \& c5ә1YkM&4+r4B]q Z=n;g@-tMcmqpZ86 {Z煻 _G߬*98$\GRYMm?^R!34ZIi4Y _+{3,_o3~&)g2~&)gRټͧW WE.FPk`mCt;7BPva μebK畩s<ϯVVQ@gW%FCx7(Fbs6Tmwh[1q^v'TYZos&)Wjr&)WjRϕ f{;3"p$@p%2`4A(OmχEt"a/|aDjq%'*Jr]:jW8 )m |@+o#G#xCYP`9#l7%ZK|kG)rH pJ*R8-LJQoF"}+L0I$ PiGSU^PLT1Py% ui錋ҁDtEZI{E.[C[$7T! ü`)-UH P\9dX1>rn{XJ f.g9_h".ܮm.ܬr}]eMvp!6Os.A/)G; Y#b9Q^]ۼg3x8aEY%|q4b9u5>J枱3m50T})#[R`ըm:][\zgqЦ w"Mw:n! ιʼn V^^8X*b~RΒ"a}QYdT;UuVMb:y~K9NYqYp=^SA1F9?BSX_m;5W<^N39%.[V41w7l'{akKeҡB`\Lę{$U$ZOFj=jG+_`'/Nl _>;; Kwz'\(ĎΤA)AxױUMz2Ҿz&Ƥ̫[ ʤN268wҾ*!>u.F Q$"/-.C#*LRIe"csbJU;ƸCxZHr}r#zpԳr"@j!xـT Ht$yl71/k`6JZӇի-$\;0z/~c+08_u W%Η.9~wWR"bnߵz텹ưXfŪ%uo~z0fx_σ?7ū }2dy鏟.΂ <2ǃ2YC ͍TzFa$X,הSs;a|z&lD"S ^duj?D ۳#_>$/Zjf`Em_}r~?|~ݷǯmhz +DZ0o+5}e?x%J{rSk/]WY/]bxFrbP16_,7᧋U^wmYcv3#ujxu"qfXB67GSlYM6Y d{guzykHLsc=?AwFgWm귌E?L&ó'9:[\!ήF o]e:? 
_?B?f_˗a9#-?jBչh%J[KD`q₊Y'VR⭦(FJaośo/ߤ>??lߞY.k={f|>&1'ם^g5 獛 [k?ڀ@wyzїt'Os~4їt{ ;;z+};z\%jOYtz=enO˖yoכ ca__Γe_ɇ' قejCS鉳_6{jh043͐ qcX<5\q`Ϳ P*$L ɭSrAݎr* ^FϤ(-ti /}5ƿwC69"1]3Cx#t)z,~˒:VĻN6^C^Ts5XZC+ _ƘK[CyC a&3MVMƬ)䡬~k\8M 984U sIvîruq9 !Y4Q8猔 !pϙsS1Jyd8VtL֌BZ=5EBbࣕJ1@ 7`|p>6Aw+b 8(xHV8$Cn^Yki Z}L?ŋ(&PnD@F_^_[ m'_7¥r 89KE "Z'j -}?Of\k?e6ReˇI(;B#l$̡l!L)S$n5'X3$FB5L}h*ͪ) xvs-"ZyYi ![]KҗLLF+#Q۵rY> .tG-R0J;Hi]&S#F2BAL)9"?i25mI3 1Q Ry(ž}YΠ2* fwrfΙL,9SV)8iI8 0ۀ3!|!:)$k6mKX0R3̙3W3`-ԂS9^(x3!HT))Ӛ&~@pY3(`q؜3g-kSr9SVLFKxHh`Z6Sы!6P&?VQ nȠJ|chA,զȫbR ;#V$sjz[![{H5bWXŪvZ-Ur ,LB;'kpx (Z(Zq0nb9[ks(Vș|iirR:1GĨ. (q@)JJ1ۋc)} y콓TG=r/Ϋ:L"P8ao74=TO_=ghɭq,l^" OSz!# d0 K[W =s]j6uka*c.HU(FUv$`(F>d2FԤvLҼLP!ѽ')DkЯ6~m.0;e\Ȍg4#hh7^;^n\u;m5_v|hIHwbV~!,Ʒ]Pe䁉]RUwxp^/p M;M3kCz0s27)G>D2,kM]֟Q=½>7qm6}M6}?QFD liJ(19'ʍ(c BPYAk74E,`\D/Otv,ÿ;7k?5gWor;IDS1꓊9iBcb/]u_+X%^ GU"ZSNS BagX04BHᨸAz-֚իZ[QqK$I8)Ur'Ϫ,lRYb8Y-0s)I:Z:MR8uIRx Ƴ;yնU"}6)"׫J֔Z4T~wT"6"WsWzB`!<U< {COԏ'9 D/ AZO]`Fk-ds 5 N8ȤSF:IӘ~Z?^$L e㳓7?;:JŐwJr;9Xio7'YCXW!JTN R6sIN۲6mnσt%}ng!^++< J~˄I'u(UcRm^.iW_ ;,܃}̏A:C`Ǡ^'>-l9uO HfkMiESJ$6Pʌ@ @,EP }9$q3jAG=5|鏇Tp\k4ݨ3$ci\35XBp|ްK#lPCb-xht%[ņoUz-Xab֫xݩwq@X+B ]*t"(ɕ@S}p4gsޏw,le< $gGydN> Fj }]? 
tC>?Oxo5W[hrL:aww ͙~gŮr,< 0G ˃n)s:{)`0jϏX],^$ ,Q`H%"?d}2ݫdw(z]E72Ooܫw(3+q+vn$xo m1S3F EP<#]~xÉr?蝜%=ӪND9-QmP֣GIr(|NOH,d>4ma *ڙv:AjOۋc"b^BžqO7>ND{%SRBxU _5]?7i9ho\NkNP}U:EFձgO.J}w${wN+y;LQk_İ}|K7\B(97[A#"š&L dqN'򧃲3Q,4'=>taГֿw5VaQѱrG`񸑍';u:ӝBc0 0Hq'8M s%zl];ۘac M2N=92AlفrQȢ[$o~U6Oj濦"9Jʚ[,O•r,8k)cy[EݲѨ)TpK[ M^4g4M,$+?63R#I^4 4f؜Y| @5$H!6S`:P䯲_o5!VH,"5^uʮWvWVGNG[%Vz/C2ZA J+n'ѕkV oʂGQ?D,#xE$QmA(YI<Հ'VJUjc)e%SpC%c %maV%+lXAQU RtMR=-uV3"E XP6K q6BԲ&Q qrK(A-șVcO`؁ cqB+ pCsrk EQ18-~ !VOXEJJZ:+I4=Vi':ZI^kvv^W$nc"˕ Y}0F`Ah8F$R1w#ZjQ qJ唬Y4|Ơ <#KIYln[O"PNb5!7cbFHR"E ƥ^ kAfx+iIeDDʵ U[`8آ⍪M$I*ׂfmY”~N^68ilIMa˥ĖQNg,ċWE;;rFӰCHҒԂ.hS%(8Q3rGQF4A]E=#-PH??:*VXmXa*I6!xK+ 9yb%e-p,A4l,x#n0+׼ ON]^ 8AźMY6*0|e݆ZJ$g23 $M1W2 8!ܻ̰aJh.曃uT\ʼ%c6Xh˧9ʼ(7Y-{4}q@qEe`D1YۜUr1R@+,p C2^Ùe2cu;bA7d&tUǖ-FRp~))Ҭznͽ|]7@40`EyyLNl0 5(BĔ[X }Oa: U6ܔFnJT[jMPmc ߾bMxIT: i!gwgSBK"+ ˯.E&P> άosq%2l|1լ g_F` /Y_/[~D%eDπz;]c',H6 1a9V%X$I0Ëͅv7Zw9W7\TgO}kknO:y4~pJn$e:i7_k L%Xd& g)oWa7v#^1 >3s Y.ڛǾwS + ӫt2^hIJ;\7 u`n>ݸww~O6ӓ{ټĦCPl$;]? / e߹v׽3Ccw{{~}q=dp\J~}rt\#)Q-p gٵ*ャWg<`;DžMN7[A8/N[/{;NY?8kAﶟu 4J՟.i*9٭~`⽯{Cg$瓮7o;lE|ܧ"rdG˜oLV'<;]ϸBЩx>[_2:z5JSfM]% `?f5SE^LByDDiLso,~I۷ȓw]3Jb~p5e&iSf@=n7ۛZ=& f'y8ݵu{{3(-°s).r# ܆ ] Noп>;="W<bwS8ؿDz^T3J'zcdϋZ^9}6, $AQ9RZ#dbe4$d4ITsjBi3[l>ed,38'I} zI36޽{FQɳW!%~RbЫDm̅-9[ ĿGu,T ŢDɐp0R'ќ3)7Wh7m86SZ(Հ gfJUjjeliRM BTT3;\8j6U-Ĩti~o=|`3͹tkVo h\n>ۜn22“vQ{&-TyoZ>0q2|go{ 2>[RLՇ0ISA2{)H~ H-$OzQ-@eJ6 ɟ9P}*E:TWWX#Vŏ\]RVSF$+"x} I X"` KpF>Zk>}.UTAjG/r8H``63 VzROn) CH6tk@ő4[) ՝s7e 0̰ƙ;&̯##U\WUئC߸c)pRzIEQJu%1Iml"bLbaQB b2"&A&FPƱE!ʻpَJ." u }f8g<U$zbʥhؖ|c$%GMs O9WSUzBZd"]mU*ڢ q0@3 )M<]l,x>yyuL'13T ݒ>)][UL E,DVJM m |Y)yz.qk|q];Ξ\ * (U!ӇI t)ЅSMI,B'aX(IQeI,%VR02ŘJ3!\dk :ȇQ ]Ϭٹo=m|f/KW9K -*TlܝhDxU(Vc s8s~s]:JRj PN^^ XVp]4LҧNLn*A^[qä($w/? 
;W困9<2&&0qpm)$o~.z%O_%Q6 T7 _n hP< &αgoY'>MB Ki6=w.Sr27atRgnKxr.!xbC9L20d7L,#3O հa|h9Bjz6|Ǡ}p[QM1jGϳđq D6u=1.tk%i]hV+>[g}d g_%R!ԡ..6$"Bש)UTamd* Q58:q(G=Q!VMG3JaEq]F%0лC4WrQu UG\h-ѠD9\0Wb\J"ף9۵OeQQO{UXV6S ( `Ruvz$bL6vkXXjdVPk 4/}݇DTiT0)tLxuĹ0Aq,ô&,Q/yF6bydU$AGBalQ'^ᜑ)Nww] 0ѤnV6}[Yt9FVKYQv(nqJ^X;A( KrzPoh*ĜL E|+œɄ+:ۜjTH&oP%Rp,lsj&VGEm*1VQ ibQ\ {t-"BBA]u]][A\5I ):SH9Mޘd F>/Z`T5e3܌>tbfr9_|)q#'nW%HcR_R*C0J'W U&V6aFm"XQ  ̩~ingWXtܶB ϧKG?~&7ôg߆-mY7,ժ1/JhEؐV"Q`uWIDhAHJԦ7'E(VmmyG* >X.J]8p*uQTbr{J0՜1cCb0b&ȸ:p8Q*N(TTXVZD Jn_ETḾ((OG} Ot"Q(6ZŹ,DƚX$\J PxRm%Nl*:2uoMq&Wf]cRU`oa9h"7P0ZVR;2֋X#b-Ja" b+4ÊkNqBlXМ̨G1٤J]D'&(r36TjW%FZ&xhW%,çN_ƙ)ds2׌=5 9Y9:|<={oq~FH3W[G ˈTZ#%ƥDŽnMuL]a)> je΃.{Pb#* hMAՐHjwUgbW;VQ RVUJqI]b˅݊w\%yRKp &/_eJɋaϝϽQ3B`9Ѿ8b YV=wX.ե<%e t%'QE:Umˣ!r9UJ.>%F׃؍P^܌ӤɳUn˖o;fv3O Lftت0Y74ՓthÈFEbnv_+i񥛑wĕB+|BeA[I'{W_Gr܂I"-rdϏgnx#%i/|f+ܴkuhoݴ-/$p/ *\vG1R=x(WEogdndNWes,m2s7vTi׫ \fMh5zNO mwt;W\}2sէ\ͧ&"E!&ȴVG(F#B*ӈ<(:$&!S+5+Ojwy?b4``~ޛ-x>'dF>6w>0-n[3&=m91eLGȼ\qd'+3RYv1 yaO<|No*}ιnW۽J@m_8ڻk0ĕEHz8lg zgP/~>cq?|OIplTl5hG/ti~_brR)_׿%bZ7+>ç3:@ڃ~ރKjxgv؛}7UtT<`窖kOU5d}\1VhThݴiXf_Vq[{mظ˪؋ $uXJЬВSZҏHS74:x_G7^z(%,*gQQ?3ܳS¢ VJQ։/Mv[vCbJ"[^$@<~nV sMqxyZ?R56.L3b-`fF߬%ÆUC4+tMrtb՝X*!ddj\( i%Bk, TS1B!KuF(M4 GnHe.yCZ}.ɬIv'ש7穦'=b}}r[ I RجB!!HLؘXPJSXNH$4SM ai5ޯb rEn7hpTS?WfN`7?xө[-_Wf_w˭0i?ozBwAmnw3h/O4.L - `*k:l~+K FR͒?eQNEk`iQ랳(x7f!IwC#w`rhQ:bV2;k(>ᑦIաq0GHAK,r`g}i# te.`dd5/e1 W8Bga[ڣS.i/hwsLsA>[W ]ț=z?#JI2, \})Yb(S`z.Lw|ԙREY4SQ|2~&"ΑI%8C8=Rg`HM Ą"%+(R$",&yeo˾$xY~@`-cZ>P1Iq$9kiIi_9B r뫑'O?cdơ Po ̦r1!{t41jAߏakGmx݄m@\~\>c3$~{e^-2etD_`̓t0?[Y: gG[NLⓐE>AcZDJqbK0D2MEt"@ēV6IMe. 8竩 | yƜ{eŢTdW<8=kBxax/3[]c6GRuA4ʨw`5Wy#Y7W-EC/m\Q =\`K76WK[l2T gV2_]KI7Ze2n:QKI'FH'`oep t(I肺VbЮu^@2[I$o m"jBw;˜r Svo_CAqW Y?F*NI'q^?hS{kOC9Z&,j&lb5ߘ꽖(k+Cט.2nRPP,WA:M7wx":=:[-"w4[ĠW-.T_103b |߱ЃyxE}TQםHM/6S.ҭCK|থإtc|SkNqK7){!|1:pn]ZD.>4-xXJ:f7>E sJ[)="#jkwV_'bjD9l|޹!KO B_Kzm@ i_aҗ d+(,T-&\ػm&˰ȅ5? 
7r@k JY~T_7s){sz խ@l 1ŪY+ՓGPO@+0аkpϠWJ RS<_|],R3^.DcL[+thAUd6[6/Mtհ]>(9_%=MuqQ>6LiLaMD&"ݪqYDߊGߍ|<sos]wԃN(dL{_)@SrT Ǣm4T6kߖͻxkw%J ;"ЋN褹>i:_HLq73<ٺu2OwNJd{xׯ[ntމQ oQmy[ndoYoxSS.ŗ ×zD0ml#Q7nVMW^ANLIc8@AϝTBnhBf IgA“X9gHFӉcQhtM=}5ôA 7m'kOj$8ܲG#Xj>*9\ÎNr=IIu0+-jJZ hlVSrYFTpS(#Uk%yĉb4b(2x#cIbxY H-i+7O͗mQ(ܮD ĉpȈIlt97poQpw |I6=Ь& #Y{tJtbR< @f;z~FI00) ^K`en>9Y PܥA_GsOA9;Ǟl=;('Sb̹,{ BpD ;c-[+Svmox);._`xx#y.w|K^|]lG0ku/0x3J*n:+ua_8jwD3 pB-:BC&`bJJR i6n#҂GU7VT_S4BUd\E箞/9l"WA>NڊNrXV" Zz_ng]E<;:H;iWW5Za{+K#tQ6ew:w$1:N uzXG({ Ϗ^9*|HN.LJ]_]j5qpM\/8s\y&]<(]*ka#3K nehF8͝wj˖Y)n.;50`sw|x'&JGCNԗG 9$?-=4:x Q)WK:i\abA!Ƃ Aɓ4pdSIDBRrJ) ¢Vr#Eڄ0"4V^ J"f8r!i.`bl2>fw3 2'`MQh1up6[dfV768B{:_{W~f#`ak?v,NO;d:xa2W,w}t O=ZMT}kEp@Rrh \dza:]:z7؊dY,揋 H Jjuuh~f%oןr,כ>ijnoz6S~>!zӃt:(tЏ7=oOgdf&{&+>.c7Pۀ!FH=aXanֽW3)&% (JٕJqo7%q``0ne֟}_]l7B.ډ}L"^u SRa@ jdpߺs`DpOGޱv%1E yO΋Ԛx]s*ip4Dԓ$iMۚ=5ӼYV;sQaeMcP;\} *rCyTife.ĉj}ދtMԋt^:`!Q^TXEaRB]bQ"5CJ0ΔN1,VIQS|l/($WW(eN ^ڊ* 'y!ZV l`mQoHǰ qlSʄlZ$M'0@@iBsv`̦ oo6Sr&ۖ]kS"_{P֟@tVإY?PdWI%ar}uUyfY/olLBg?=C FY5=|ӻbW+/+O+=]Wy,֫WOirDu7!V-B rB%GjRK6zC% vV /jS#q'Td8RҩX$F \} n_p_ְFHE\>XcM~ HM/Γ'!/+8_YEiŤz^W3gFSoaןmlH yLQ~u*x缨ҭv5|=zf)rV*p$SQyE% lrbUM %8/9J*ZH΍3M dpnH❯1QhƧ3H*B\? xa/>ak,F:,2xuLeGҝˀ_/n,] QP6ˈ)S[S/#eŌ䪣ek.5GBM9F4[aBNU{F]}GZJp ,Es:ZIɒƌ5T /K^4vi+niv(ײոL^]$fݫ+tcΓ&TL_@fLNҒM0[F|f5aL- :cT`4Q'?L`J46U},o߲ZP3^_,ug\(NKʝW+w^yrW+:I"$;]9u(AD1Hbbs$X?}>e,\t:ݤ?U_t}ʉD 14ʼnm 5_V߻LCC[s}9M*%/WbX)PGPM&Ţ`}:əaF)4/n|G,9'+SC- /_?>gGzk|ʏC-jCㆇHN!rȱ!"dsuS|XF;P*+Dڄux z؆V' ە@?wTi $L6:P\; hX}T4 MN  ~~73_ > d떩NA{sTo!Ÿ~f&5b{O/>eR6K:YǼ`Q*\:OZp wurDnMFc\ N1 9Ǵi խy1AfF5BiF2n:H=֚8^T;qv%խzѹAVF)cK YCCZSԵY6|EpRM Iv߁ٺOѾK5ZSigۡ1 ՂkRl4 8+Qx8I6SFqR5nTꐜ4ĩIp$Idٜ&1hT..r!/ OdZbi0I[ CoL9$rycs"V&f)9!8İi#$(Eʈ:2fVj JsF>ǥ@\rЩ$"!tX@(Rαe6,E'i<]rEh%8Μ%Q={|qV]J x'W3ԚRrZƁm ,dLdf aNcTZ8%4Q@a:BJcA 4AIy%嘒O&".vo*0ibe晏L]M(Cr[l4wڟ4Xܩ|?_j lp #%s්1lTL(1攄9$i$ gAmJY`vGkka.?Ou櫙fS 4sj ! 
HFC::>O#(X.@Y(@3k-q?fnߢG{x@L}[jY6 /^oFLh2}~(ۥ?'߰ "DWW/*A+ջ{;/ɘS <& 8d9Dz5 >bd܂M/{jjqti]K29(jz]֮ `N][<ލqn>*xyyz{VhuudKn@"Дܾ]ޓYdx= lOCPRsx#>"mI`8Lg`9t0Xc=)L#4BhdfǩÔbE0 Z^4YAq1ã|=x k+8=-V2Cg=LFto"(L0TOٲ30Vτ&uFiĊ X݌NkPb* L1*c z 2˃9U_ Ȼ讖V:f~_ɲ`a0߹D0iIiSR1wjG#X$V>8JИ L}ő=4} X:e_NŋS |ٷ'sp{ឰ3#K70<0)&J #R\I(KZ,A1 ke,i{gFo%-*}ФT:FlEۍ%x=e:JhBk?ۺi[Ghs{'"TV\  3-DapfX= :kO ~ we?XvCz``,@+rFu h+䰼h25qE0q9m.Gf8 ~l!!岅]$[H1hj.=Qr ~.,ڷTYxvG]9wF#a$2"qCe!+z h;eW0#kӵuU٣T\,_+kzO&5bb~ts+Z]-w]9fE^J OM3ϴlvSdv‹Yfxozkh=GV4eL9iYd#DSb'#`bL jp /=8ڰ\ ]JWC$ =?eFGtiZPMl Lʴ e8I,E*15]+#sJ`Jc#T9\ S.ղ4wAVS˱v_b^.+qiG}±**ͅ.`\NjAԲ8I-~xP|1 (DRmq1f&FӔb8)Cw4aL&ciJ**K.02GME̸I`1)Xb-2BScdCڍl - 5j+*8Yv#(W]k-#A>O  i.!м˥w*͇ #_|WKkuiB}HzWcLNIG^X.`Q e"\i[x}o}\5lJX@h.аXp$/X<]&vy!gia\L>tKV\Kv}j6biT0*x6S+^+y\u7HiAkm$E";?4 L1 R]l%u!dT Yݚ1rI:>\ 0[j{1iTT#˱q`Di\B.4;^@]5Ꟈ5G(Fk7[mTY67lεQYϜk [`uܰwnwEg_{{ iJ{%F.7[D8Q҇9ʼn`Hyކ".ၠN:Bh=k_8֋!xp u .nR)8vɺ3H$>:4mn5 \vtvDWY-!7 @?BL!8FWOC9LrȌf ?mYTƣk4C8qKJSdwlfϋ״cgȡkZ..}FIܵH<3JJ17C_ERS2 'SPM#wB "#L5)XuɹA0h"*%GnEݼovD>fa-cz^.oAO47To *?op ;sG&Y˹zmt\*` Vq8 `p"2#˂U'8٭6_fW$ÐudC̤ka4nEN#xp!QBV|Ǖ"pDBگ UIgbq^1ZhRae$2@wVd?2ǝi%euD:%Jm]^ޮy&UTMJe^ )Rߤ86|fv2*JqVHTE. 
?U-r:ʋ8eks0+ Kv挗'`i{Nx}>`]5̩EȋR_x]e@<,@ -?hb kaۣEz2z~pxrpj_R,D^`~sĥSm3UC:s#w@X``bO8qqS1bߡ @ȫJ0P"s* $()̉kl0Ջ:A|8 (NKG:C@̫@%Fp^-ud:Rd^ 2?mYJne\bL)Xg[¾~=Jf%Ί|rR C1 H"1`D90BOqY4"A(l{;+^ZUV51~o7*CxDQ0vI C0 㐲NR,1bL"*$"L)gaK Ӯ 85ɔQ,$Sr*տE($RPA'% `$ aDqS mI"Z}}KtS==޲}J\_?| ār/#q_fuu7F @8}~tzqMXN|A#y .@>Wk{V i\wU)GN_ʛe>ꆷoo c5sN[ 'wCC2P~yK4FC.(D@^Rx)!< R Ǡgc{JcwJb;jj zyZ$%3Z&XM="XI`DJmēwVŸDVa]Akx#sɐ*!4u;E9 AhШ/n ƒqcx:p ?7N`e6)$уkY8 'qL&$L_lVqreW 1PQoKn3q[֙h8g`̨2* NPY},}=lQF2[{*Qk;;"jn*"c:u{@jneAgمĆW ,K-9vP@ס)_~kB9Zr"5Sk#%Yb ;RG;$GI6z,zV?ۿM.2z?6tZ 2MߑIIx7K,l{?5F)&;Jab*}Sry<ҲjK7VA6{s5dӯO>E)lk7jTR5j4Jڭ3[Cڭqu1[ ''sʲ7긁PQ#CURqH WoO"x "*X)cɠ_9԰Q;xvj8v%~z_e|rjyadvEQ&̺`-14Z*ɗwLI:d(DR+j.Lx{T{3CxcGEn:QAd1F]B頲7Bs׳]\6JsSA 1 &xBx D sƁEC tV惭q7x Q1:(leo8s\aE\s2ɨky+3⏆џVtX9-ߪn8wuU[HѢ_Ċn&C@{4)_&]zȷwQAw)914дEbN3^߫}V/qep,Ѭ jf=gHnJu売@ "3oS&`~Jm5ZW/(wzWf4lz,*fz /7ñ9sơ#̇3-:iu;D"O p.E^>{Y?"1-#>x{."~L {8N+(,S a<NKB 'Y03o4 (4 )omo=L̻]Ƙ#`ܺ6*cr3 &*N9˜RSN"! 4A L  Q;+W 8Fе11B! x3! Q"SQS@bN#t`&lD'uuΘCR^sR,Qc7slwy8S5C@P2LE $IN!rH!1 hS 00q*A`4qW*eFR'w=`u{V#'3!UQh+یd*%rJI8D& mSj^ B 9̾lYj$t͌E]N3_]QTϥHچP'_%h6zMdL _oV/[:o[]§*CLq HL$faN5O syoʅj\Д 0OJ s` ʇ<᫂ acs3D: n<߳E6Vj2$!*P+!@^@Iv@c=<.z]XN%%+7d q݀B^< T\`uJ4{+/nwC(_M,Ϗп߮avk™49ۋiӺYxC-qaJg'G6o'\K. Ē%M"RCSWA+0t bH؊caE*$HMD#]?!Q܍Tƨ+~cc~:u*4%j1D7b*Ɏи?9KӺv1#CLNv x`SV%Č;!}n:}z` \wfo ?ng[=\ԇ/_ߏzU8d>>.A]3t.@,@wN9g-܀I6L9)lE_Ƥ:{`Z'^z)=:c_o ;@>/]{K>&@>C 'I!O{'3_KQiQjI}ٻSrQ͞P1|7uQ/Җmxs C@)eԏ\vu 9wul'HKFS97pwVŸDܰzd7/5zS ѳ8rӏPd qfU((jnhuz7ܒ;8a%N~pLxۅ!/v5$u(^o=jLi^UGՋ^H# t}bͲ`F~ӟ6ĨIT5}ᖲJCUA#7tl ұc{>!\M}́ 1 P Yb@1 ?T,8f\IcR< 1COL:o|Votݕ x2&VzKL[#T`_j̩=Pk=ֱNDIC>2H3Y?4)IǘH{V2=lRX L€˅WVZY 7G]&gA˩.XSt-Q8s`=q^j$|P84dO8 T8C0_P{º_L^"<ҙr7)?|xp Er.!x][{閚^] N9w\xrLpNbՍ+\}u[>L[m\@wTH+!GN? JЉc^VB̠[ qd0CA|=F76B? {Wslսca5, #a\H-_?ֈpOL8$3nf{So3 v x~kضWf4}탈A؅A8;8;'R+WI܎g\O!؝j7ʼ][4~.$}e ֝mIJgjf0{o8УGz=6YN@.B)Fu`FF*߮1&㙎- }r١r *Fiά%ZdIWMBxWū[&F?7~)aPݟY:e^\vJ ME$-"k*Z )#*q5W$.$cLI? 
X'}B?P BN2؜GK[=C͕T>o#K.F.|+"T5eb_}-wߔ*#zr}c9VATPPDZC^nPj&ܑzXfcsSuND!jhlhF"$O%mbr/9Ŝ~2Unn=^eak71NBlr }##iynnRBR,t#gQVH!I(Ngn#7d8;8Հn*־@*SX}s1q]DTz7ogH.Nb`XJ־s/M9q2v ޼ 0\0Ѕ>ϡ!N{:!nLɒ>wAT@amTtW\ʍĔ\E9F,{8xn֣Ie3Ies^q1 =JcF$ (!bCF1/# I^B1CAaŀ@B L*0"Gaġ (px IŸW2Rcs 1 /%rJ3sq"R#Pj%n9TќĒЉ(>1 !U|J#|V " ]: :BJksn8ZÓt6ı}dHxR1)+  L+E,d $*䀘Q:= ZMi.&<9ެ|CTTXjEkڋe㇏Gfr Az߿߳[/̖ێ;x__7;>ՙ~Lc?3כ}fv>퍀d3,@io6zAE= >=>2/N8r0`MF^P1dQ+͉-(V]jzqηzm^|47-5b0j^"O&id])o,57z,I^4&\z-Ivץ!h2lfMԢ"9*2A>t+o،z{vژ 5jKSِ>Q_Xp,kYx&yٙO YSĎ^\ s.L;VnA6ߎ!<{tV%]'G遍Hmi/~VBP. ,7zA~)qS.gzh9}=wIĕڛCb$|"=_#K1&vc@5C? v #MV]ǽ?+qw}YϛTc2[3g9;26NZlb!Y7" f)@DQ8e%24Y!vD)\DO,(P8)5PbJ¹~]Rv F!pH8s/*Ϛ5 !s6#=2륧B!-+2D 7)|Rc28h@g8ER + tVxМс[- pn۴CRy:Q-ͷ^|e2- p8κE]ht4q@S ƮhnDIA׾|}gm-kJcJ]ڭ9[Jh&ӷ}3[ps %OJ_Jj|׊`*[B\i;(o>P{ Afmzoj55j1ՏI="w6|9*T (3WOӐ;w62k5VMŲ>Jn-Mi+wt!k3q$D~ as n evFBLVCm:FHY0רπЫiqa=?ɔo&pu˦x:ꍐőhtUhօQzBD5kYz-3/aPHs",ɠ-0HM,,>չ(<^GHS›3WK("S)\Rz7֮|5\t:v0.4snfI愑nqS;.bp+]@Wuw2$oԝ;#Ud;dT!#E%q; NZwV0#?*Ljh7fMZE ;M:2bqtt֒7pA܅t{hD1E8o P6uJaA^5|]@;0Pֺ>фI@M /&]/''f;3I ټeQ!HW!2DaibCD"$Tay;FsC y~ڛA@ ro§>@ij0EH*xlZ/$OOj3r WIB5l#ML}™,OC6&秤# nFt-#^Z^ 3Qo_ C3Z'(3Pwh9uʐWG>b~{P x} '7F̷+P,m:X7/J%w x'D-ػh%Yʎz⬚XG{:GmZ @oR 6# ߎi%P5[DߵsҚmDC⺲LI}I]5|]5\/U9Z udQ(ז/f^mߨvꎾ?h L[y;j7pP]3Wr Q6j (,<V|X=2J#s=cx==esTׯj ݣ PcvAo!=LF &k{~z0`)PIqTkEc*yLɤ”7uhjj "=pߴ| !O=1k\%:T,9=VJZ !.Z$a|j6,S'pWʭ&HQ3iQOjZY}ZYB4W+3;3Z9/4s'砡 8xzXzg[]ndqMTwaU*STڹ Au ,v׷{Tk]i1$O֧y/S<[ά^L?{3HiizGhhH<ؿ=EM3!9f@WTͳ8O34nUXS61oSB[qXipt_%rEɬ*-@9ϏEE8{ Eq}G:(iN5n>$/.eJmgkMit-sy "˖լ(8c)A1˩}+)9lYVZ.|G,ҒFSYp暮)/=}û,#MCt0'Xɔ*$"?O>(p!]R!͒>6_F [d7 4W9ۉQO>A<'A7M'܄_@Y`HL@(JdJ6HA{m^R&5 q0H9Qo}f0 \GlT\jZUVzX *%*<:@gJ@V9ys:;UG@38܆za &BFuC<ò??t~U$ԫ? 
Feb 16 00:06:29 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 16 00:06:29 crc restorecon[4697]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 16 00:06:29 crc 
restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 
00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c476,c820 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc 
restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc 
restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 
00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc 
restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 00:06:29 crc 
restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 
crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 
00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 00:06:29 crc 
restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc 
restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 
crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc 
restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:29 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc 
restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 00:06:30 crc restorecon[4697]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc 
restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc 
restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc 
restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 00:06:30 crc restorecon[4697]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 00:06:30 crc restorecon[4697]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 16 00:06:30 crc kubenswrapper[4698]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 00:06:30 crc kubenswrapper[4698]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 00:06:30 crc kubenswrapper[4698]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 00:06:30 crc kubenswrapper[4698]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 00:06:30 crc kubenswrapper[4698]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 00:06:30 crc kubenswrapper[4698]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.909708 4698 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916464 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916499 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916509 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916518 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916526 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916535 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916544 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916553 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916563 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916571 
4698 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916581 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916589 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916597 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916605 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916618 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916649 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916657 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916665 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916673 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916681 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916695 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916703 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916711 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916720 4698 feature_gate.go:330] unrecognized 
feature gate: InsightsConfigAPI Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916728 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916736 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916744 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916752 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916759 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916767 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916775 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916783 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916791 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916798 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916806 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916814 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916821 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916829 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 
00:06:30.916837 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916847 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916856 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916864 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916872 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916880 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916887 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916894 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916902 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916910 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916917 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916928 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916940 4698 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916949 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916958 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916967 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916977 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916986 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.916995 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.917003 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.917011 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.917022 4698 feature_gate.go:330] unrecognized feature gate: Example Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.917035 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.917045 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.917054 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.917063 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.917072 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.917082 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.917091 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.917101 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.917109 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.917117 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.917126 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917337 4698 flags.go:64] FLAG: --address="0.0.0.0" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917357 4698 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917371 4698 flags.go:64] FLAG: --anonymous-auth="true" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917382 4698 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917395 4698 flags.go:64] FLAG: 
--authentication-token-webhook="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917404 4698 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917416 4698 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917462 4698 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917473 4698 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917482 4698 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917492 4698 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917501 4698 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917510 4698 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917519 4698 flags.go:64] FLAG: --cgroup-root="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917528 4698 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917537 4698 flags.go:64] FLAG: --client-ca-file="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917546 4698 flags.go:64] FLAG: --cloud-config="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917554 4698 flags.go:64] FLAG: --cloud-provider="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917563 4698 flags.go:64] FLAG: --cluster-dns="[]" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917574 4698 flags.go:64] FLAG: --cluster-domain="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917583 4698 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 16 00:06:30 crc 
kubenswrapper[4698]: I0216 00:06:30.917593 4698 flags.go:64] FLAG: --config-dir="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917601 4698 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917611 4698 flags.go:64] FLAG: --container-log-max-files="5" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917646 4698 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917655 4698 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917664 4698 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917673 4698 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917682 4698 flags.go:64] FLAG: --contention-profiling="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917691 4698 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917700 4698 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917709 4698 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917718 4698 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917729 4698 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917738 4698 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917747 4698 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917756 4698 flags.go:64] FLAG: --enable-load-reader="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917765 4698 
flags.go:64] FLAG: --enable-server="true" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917774 4698 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917785 4698 flags.go:64] FLAG: --event-burst="100" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917797 4698 flags.go:64] FLAG: --event-qps="50" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917806 4698 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917815 4698 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917823 4698 flags.go:64] FLAG: --eviction-hard="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917834 4698 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917843 4698 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917851 4698 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917861 4698 flags.go:64] FLAG: --eviction-soft="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917870 4698 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917879 4698 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917891 4698 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917899 4698 flags.go:64] FLAG: --experimental-mounter-path="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917908 4698 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917916 4698 flags.go:64] FLAG: --fail-swap-on="true" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917925 4698 
flags.go:64] FLAG: --feature-gates="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917936 4698 flags.go:64] FLAG: --file-check-frequency="20s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917945 4698 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917954 4698 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917964 4698 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917973 4698 flags.go:64] FLAG: --healthz-port="10248" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917982 4698 flags.go:64] FLAG: --help="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917991 4698 flags.go:64] FLAG: --hostname-override="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.917999 4698 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918008 4698 flags.go:64] FLAG: --http-check-frequency="20s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918017 4698 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918027 4698 flags.go:64] FLAG: --image-credential-provider-config="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918035 4698 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918044 4698 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918053 4698 flags.go:64] FLAG: --image-service-endpoint="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918073 4698 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918082 4698 flags.go:64] FLAG: --kube-api-burst="100" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918091 4698 flags.go:64] FLAG: 
--kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918101 4698 flags.go:64] FLAG: --kube-api-qps="50" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918109 4698 flags.go:64] FLAG: --kube-reserved="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918118 4698 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918127 4698 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918137 4698 flags.go:64] FLAG: --kubelet-cgroups="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918146 4698 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918154 4698 flags.go:64] FLAG: --lock-file="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918163 4698 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918172 4698 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918182 4698 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918195 4698 flags.go:64] FLAG: --log-json-split-stream="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918204 4698 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918213 4698 flags.go:64] FLAG: --log-text-split-stream="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918222 4698 flags.go:64] FLAG: --logging-format="text" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918231 4698 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918242 4698 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 
00:06:30.918252 4698 flags.go:64] FLAG: --manifest-url="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918261 4698 flags.go:64] FLAG: --manifest-url-header="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918272 4698 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918282 4698 flags.go:64] FLAG: --max-open-files="1000000" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918294 4698 flags.go:64] FLAG: --max-pods="110" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918303 4698 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918312 4698 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918321 4698 flags.go:64] FLAG: --memory-manager-policy="None" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918330 4698 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918340 4698 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918349 4698 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918358 4698 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918377 4698 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918386 4698 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918395 4698 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918404 4698 flags.go:64] FLAG: --pod-cidr="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918413 4698 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918425 4698 flags.go:64] FLAG: --pod-manifest-path="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918435 4698 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918444 4698 flags.go:64] FLAG: --pods-per-core="0" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918452 4698 flags.go:64] FLAG: --port="10250" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918461 4698 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918470 4698 flags.go:64] FLAG: --provider-id="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918478 4698 flags.go:64] FLAG: --qos-reserved="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918488 4698 flags.go:64] FLAG: --read-only-port="10255" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918497 4698 flags.go:64] FLAG: --register-node="true" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918507 4698 flags.go:64] FLAG: --register-schedulable="true" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918515 4698 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918530 4698 flags.go:64] FLAG: --registry-burst="10" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918538 4698 flags.go:64] FLAG: --registry-qps="5" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918547 4698 flags.go:64] FLAG: --reserved-cpus="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918556 4698 flags.go:64] FLAG: --reserved-memory="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918566 4698 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 
00:06:30.918575 4698 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918584 4698 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918593 4698 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918601 4698 flags.go:64] FLAG: --runonce="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918610 4698 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918650 4698 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918659 4698 flags.go:64] FLAG: --seccomp-default="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918668 4698 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918677 4698 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918692 4698 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918701 4698 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918710 4698 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918719 4698 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918728 4698 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918737 4698 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918745 4698 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918756 4698 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 
00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918765 4698 flags.go:64] FLAG: --system-cgroups="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918774 4698 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918788 4698 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918796 4698 flags.go:64] FLAG: --tls-cert-file="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918805 4698 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918816 4698 flags.go:64] FLAG: --tls-min-version="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918825 4698 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918833 4698 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918843 4698 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918851 4698 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918861 4698 flags.go:64] FLAG: --v="2" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918872 4698 flags.go:64] FLAG: --version="false" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918884 4698 flags.go:64] FLAG: --vmodule="" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918894 4698 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.918903 4698 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919102 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919115 4698 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919126 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919135 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919145 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919153 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919162 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919170 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919181 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919194 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919202 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919210 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919218 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919226 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919235 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
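The `flags.go:64] FLAG: --name="value"` dump above records every flag the kubelet started with. When auditing a node, it can help to fold those lines into a dictionary for diffing against the expected configuration; a small sketch (the sample lines are taken from this log, the helper name is made up):

```python
import re

# Each startup line has the form `flags.go:64] FLAG: --name="value"`.
FLAG_RE = re.compile(r'flags\.go:\d+\] FLAG: --([\w-]+)="(.*?)"')

def parse_flag_dump(log_text: str) -> dict:
    """Extract flag name/value pairs from a kubelet flag dump."""
    return {name: value for name, value in FLAG_RE.findall(log_text)}

sample = (
    'I0216 00:06:30.918358 4698 flags.go:64] FLAG: --node-ip="192.168.126.11" '
    'I0216 00:06:30.918872 4698 flags.go:64] FLAG: --v="2"'
)
flags = parse_flag_dump(sample)
print(flags["node-ip"], flags["v"])  # 192.168.126.11 2
```

The non-greedy match keeps each value to its own quoted span even when several entries share one physical line, as they do in this dump.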
Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919245 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919254 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919264 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919272 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919280 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919290 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919298 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919306 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919315 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919353 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919362 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919370 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919378 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919386 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919393 4698 
feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919401 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919410 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919419 4698 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919426 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919434 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919441 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919449 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919456 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919465 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919472 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919480 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919494 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919502 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919510 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 00:06:30 crc 
kubenswrapper[4698]: W0216 00:06:30.919518 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919525 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919533 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919540 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919548 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919556 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919565 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919573 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919581 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919590 4698 feature_gate.go:330] unrecognized feature gate: Example Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919599 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919607 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919614 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919646 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919655 4698 
feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919662 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919670 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919677 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919685 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919693 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919701 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919710 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919719 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919727 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919735 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919745 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.919752 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.919777 4698 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.938753 4698 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.938811 4698 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.938929 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.938943 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.938951 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.938958 4698 feature_gate.go:330] 
unrecognized feature gate: MixedCPUsAllocation Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.938964 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.938970 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.938975 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.938981 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.938988 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.938995 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939000 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939006 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939013 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939021 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939027 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939035 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939041 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939048 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 00:06:30 crc 
kubenswrapper[4698]: W0216 00:06:30.939054 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939060 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939066 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939071 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939077 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939083 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939088 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939093 4698 feature_gate.go:330] unrecognized feature gate: Example Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939098 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939114 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939120 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939126 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939131 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939137 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939142 4698 feature_gate.go:330] unrecognized feature 
gate: MachineAPIMigration Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939148 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939154 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939159 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939164 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939169 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939175 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939180 4698 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939188 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939196 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939203 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939209 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939216 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939222 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939227 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939234 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939240 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939246 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939251 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939258 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939267 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939274 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939281 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939286 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939292 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939297 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939303 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939308 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939313 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939319 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939326 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939333 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939339 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939346 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939353 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939358 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939364 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939372 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939379 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.939391 4698 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939646 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939659 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939666 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939672 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939679 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 00:06:30 crc 
kubenswrapper[4698]: W0216 00:06:30.939686 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939695 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939702 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939708 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939714 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939720 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939725 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939731 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939738 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939744 4698 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939750 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939755 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939761 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939766 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939773 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939780 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939788 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939793 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939801 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939807 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939813 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939819 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939826 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 
00:06:30.939833 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939839 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939845 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939850 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939856 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939861 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939867 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939873 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939880 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939886 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939893 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939899 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939904 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939911 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939916 4698 
feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939923 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939931 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939937 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939943 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939949 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939955 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939962 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939968 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939974 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939980 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939986 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.939993 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940000 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940005 4698 feature_gate.go:330] 
unrecognized feature gate: ManagedBootImages Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940011 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940017 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940023 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940028 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940034 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940040 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940045 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940051 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940057 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940062 4698 feature_gate.go:330] unrecognized feature gate: Example Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940068 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940075 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940081 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 00:06:30 crc kubenswrapper[4698]: W0216 00:06:30.940087 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 00:06:30 crc 
kubenswrapper[4698]: I0216 00:06:30.940097 4698 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.946567 4698 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.955379 4698 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.955531 4698 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.960030 4698 server.go:997] "Starting client certificate rotation" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.960074 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.961313 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-02 11:55:25.169946687 +0000 UTC Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.961491 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.996471 4698 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 00:06:30 crc kubenswrapper[4698]: I0216 00:06:30.998554 4698 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.002598 4698 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.025820 4698 log.go:25] "Validated CRI v1 runtime API" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.078541 4698 log.go:25] "Validated CRI v1 image API" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.084104 4698 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.091245 4698 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-00-00-35-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.091296 4698 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.120768 4698 manager.go:217] Machine: {Timestamp:2026-02-16 00:06:31.114953926 +0000 UTC m=+0.772852758 CPUVendorID:AuthenticAMD 
NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57 BootID:5fc85dad-076c-40e5-8031-b86a3144865b Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:1d:1f:e1 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:1d:1f:e1 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:30:b1:c6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d5:bb:2b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1c:3a:59 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:66:82:53 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fa:38:99:3c:f3:ed Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2e:ae:1e:ce:bc:86 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 
Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.121275 4698 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.121550 4698 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.122483 4698 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.122939 4698 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.123019 4698 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.123418 4698 topology_manager.go:138] "Creating topology manager with none policy"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.123439 4698 container_manager_linux.go:303] "Creating device plugin manager"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.124477 4698 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.124553 4698 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.125927 4698 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.126107 4698 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.132112 4698 kubelet.go:418] "Attempting to sync node with API server"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.132177 4698 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.132233 4698 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.132265 4698 kubelet.go:324] "Adding apiserver pod source"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.132297 4698 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.137839 4698 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 16 00:06:31 crc kubenswrapper[4698]: W0216 00:06:31.138935 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Feb 16 00:06:31 crc kubenswrapper[4698]: W0216 00:06:31.139003 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.139100 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError"
Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.139130 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.139903 4698 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.142993 4698 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.146010 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.146060 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.146076 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.146091 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.146115 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.146130 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.146143 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.146165 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.146181 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.146195 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.146214 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.146228 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.147409 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.148242 4698 server.go:1280] "Started kubelet"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.149409 4698 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.149408 4698 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.150337 4698 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.150415 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Feb 16 00:06:31 crc systemd[1]: Started Kubernetes Kubelet.
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.151611 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.151685 4698 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.151856 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 04:44:11.328932183 +0000 UTC
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.152035 4698 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.152068 4698 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.152225 4698 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.161276 4698
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 16 00:06:31 crc kubenswrapper[4698]: W0216 00:06:31.164612 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.164774 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.165685 4698 server.go:460] "Adding debug handlers to kubelet server"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.166973 4698 factory.go:55] Registering systemd factory
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.167025 4698 factory.go:221] Registration of the systemd container factory successfully
Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.167125 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="200ms"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.170668 4698 factory.go:153] Registering CRI-O factory
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.170749 4698 factory.go:221] Registration of the crio container factory successfully
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.170905 4698 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.171312 4698 factory.go:103] Registering Raw factory
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.171402 4698 manager.go:1196] Started watching for new ooms in manager
Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.171210 4698 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.143:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894915f1c6bc176 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:06:31.148192118 +0000 UTC m=+0.806090920,LastTimestamp:2026-02-16 00:06:31.148192118 +0000 UTC m=+0.806090920,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.172779 4698 manager.go:319] Starting recovery of all containers
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.180581 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.180759 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216
00:06:31.180794 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.180821 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.180846 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.180872 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.180899 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.180924 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.180955 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.180985 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181013 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181039 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181067 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181100 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181132 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181158 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181188 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181214 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181241 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181266 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181291 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181319 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181345 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181369 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181395 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181423 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181564 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181600 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181664 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181693 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181833 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181863 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181895 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181924 4698
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181951 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.181977 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182002 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182032 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182061 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182086 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182152 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182181 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182206 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182234 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182259 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182286 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182311 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182337 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182371 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182397 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182421 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182447 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182491 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182557 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182587 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182651 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182686 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182712 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182742 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182770 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182801 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182830 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182859 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182887 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 16 00:06:31
crc kubenswrapper[4698]: I0216 00:06:31.182922 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182948 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.182973 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.183000 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.183027 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.183053 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.183079 4698 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.183106 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.185707 4698 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.185781 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.185817 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.185842 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.185866 4698 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.185921 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.185947 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.185978 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186010 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186038 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186064 4698 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186088 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186114 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186141 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186168 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186195 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186221 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186248 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186276 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186337 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186392 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186421 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186446 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186470 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186527 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186553 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186578 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186608 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186679 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186711 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186740 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186767 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186795 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186849 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186879 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186907 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186937 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186966 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.186997 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187029 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187061 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187087 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187118 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187149 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187177 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187204 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187230 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187254 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187278 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187306 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187332 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187359 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187385 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" 
seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187408 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187434 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187460 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187487 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187515 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187545 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 
00:06:31.187570 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187597 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187659 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187688 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187715 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187743 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187774 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187803 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187834 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187860 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187889 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187916 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187943 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.187971 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188004 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188030 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188058 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188087 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188114 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188142 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188169 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188195 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188223 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188248 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188274 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188302 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188329 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188359 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188386 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188415 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188443 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" 
seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188475 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188504 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188534 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188568 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188599 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188670 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188701 4698 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188728 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188752 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188777 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188804 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188831 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188857 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188884 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188913 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188942 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188969 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.188997 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189023 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189048 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189075 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189100 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189125 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189155 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189183 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189208 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189233 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189260 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189292 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189320 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189350 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189378 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189404 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189430 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189458 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189489 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189551 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189577 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189604 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189665 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189695 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189777 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189864 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" 
seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189900 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189928 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189957 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189981 4698 reconstruct.go:97] "Volume reconstruction finished" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.189999 4698 reconciler.go:26] "Reconciler: start to sync state" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.201323 4698 manager.go:324] Recovery completed Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.221824 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.226101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.226154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.226170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.226482 4698 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.227239 4698 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.227270 4698 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.227343 4698 state_mem.go:36] "Initialized new in-memory state store" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.230296 4698 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.230368 4698 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.230417 4698 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.230669 4698 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 00:06:31 crc kubenswrapper[4698]: W0216 00:06:31.246391 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.246509 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.262437 4698 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.301452 4698 policy_none.go:49] "None policy: Start" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.303597 4698 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.303692 4698 state_mem.go:35] "Initializing new in-memory state store" Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.330847 4698 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.363522 4698 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.368138 4698 manager.go:334] "Starting Device Plugin manager" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.368231 4698 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.368254 4698 server.go:79] "Starting device plugin registration server" Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.368719 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="400ms" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.369049 4698 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.369083 4698 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.369398 4698 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 
00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.369540 4698 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.369564 4698 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.382647 4698 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.469452 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.471262 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.471337 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.471376 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.471422 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.472425 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.531534 4698 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.531769 4698 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.534549 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.534650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.534664 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.534936 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.535438 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.535524 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.536312 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.536359 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.536378 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.536661 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.536740 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.536795 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.537514 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.537551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.537561 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.538207 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.538229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.538242 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.538269 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.538284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.538294 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.538425 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.538520 
4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.538547 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.539389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.539421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.539430 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.539529 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.539810 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.539890 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.539893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.539972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.539988 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.540649 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.540686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.540696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.540835 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.540861 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.541363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.541417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.541442 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.542087 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.542132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.542142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.595449 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.595490 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.595523 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.595542 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.595559 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.595576 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.595762 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.595838 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.595892 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.595937 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.595987 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.596053 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.596103 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.596155 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.596201 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.673428 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.675191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.675263 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.675279 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.675472 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.676325 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697543 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697583 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697616 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697663 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697685 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697714 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697734 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697761 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697787 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697814 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697811 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697831 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697907 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697936 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697961 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697839 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.697984 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.698115 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.698119 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.698148 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.698156 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.698207 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.698217 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.698223 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.698131 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.698244 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.698300 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.698302 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.698331 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.698403 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: E0216 00:06:31.770187 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="800ms"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.866877 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.877450 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.896153 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.905397 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: I0216 00:06:31.913814 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:06:31 crc kubenswrapper[4698]: W0216 00:06:31.981481 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-cf2a8c7930bce86b8c126f1a72b10880566b67e1999891cf0ab00a1476854244 WatchSource:0}: Error finding container cf2a8c7930bce86b8c126f1a72b10880566b67e1999891cf0ab00a1476854244: Status 404 returned error can't find the container with id cf2a8c7930bce86b8c126f1a72b10880566b67e1999891cf0ab00a1476854244
Feb 16 00:06:31 crc kubenswrapper[4698]: W0216 00:06:31.986218 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-70005d4058f6cc4398488de0a790a0052b86d2ddf2afdef2c73058718d523fd6 WatchSource:0}: Error finding container 70005d4058f6cc4398488de0a790a0052b86d2ddf2afdef2c73058718d523fd6: Status 404 returned error can't find the container with id 70005d4058f6cc4398488de0a790a0052b86d2ddf2afdef2c73058718d523fd6
Feb 16 00:06:31 crc kubenswrapper[4698]: W0216 00:06:31.992281 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-60e06b866e3a9b5a5e6da59df5f4343fbfadcec605cd2aa2ff3abc87055f20c6 WatchSource:0}: Error finding container 60e06b866e3a9b5a5e6da59df5f4343fbfadcec605cd2aa2ff3abc87055f20c6: Status 404 returned error can't find the container with id 60e06b866e3a9b5a5e6da59df5f4343fbfadcec605cd2aa2ff3abc87055f20c6
Feb 16 00:06:31 crc kubenswrapper[4698]: W0216 00:06:31.992714 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-fceafc038b2eb0b29cc17920a58d054790b699dcf6c7ee0ca10eb945ec1f3937 WatchSource:0}: Error finding container fceafc038b2eb0b29cc17920a58d054790b699dcf6c7ee0ca10eb945ec1f3937: Status 404 returned error can't find the container with id fceafc038b2eb0b29cc17920a58d054790b699dcf6c7ee0ca10eb945ec1f3937
Feb 16 00:06:31 crc kubenswrapper[4698]: W0216 00:06:31.997069 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-0743a247bb4829d30f8014c26cf0da72ae9c59f69b97c5bc81177f007aa81a40 WatchSource:0}: Error finding container 0743a247bb4829d30f8014c26cf0da72ae9c59f69b97c5bc81177f007aa81a40: Status 404 returned error can't find the container with id 0743a247bb4829d30f8014c26cf0da72ae9c59f69b97c5bc81177f007aa81a40
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.076513 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.078573 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.078645 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.078663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.078701 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 00:06:32 crc kubenswrapper[4698]: E0216 00:06:32.079301 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc"
Feb 16 00:06:32 crc kubenswrapper[4698]: W0216 00:06:32.118059 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Feb 16 00:06:32 crc kubenswrapper[4698]: E0216 00:06:32.118203 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError"
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.151923 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.152031 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:52:14.671581297 +0000 UTC
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.236402 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"cf2a8c7930bce86b8c126f1a72b10880566b67e1999891cf0ab00a1476854244"}
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.238089 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0743a247bb4829d30f8014c26cf0da72ae9c59f69b97c5bc81177f007aa81a40"}
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.239810 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fceafc038b2eb0b29cc17920a58d054790b699dcf6c7ee0ca10eb945ec1f3937"}
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.241673 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"60e06b866e3a9b5a5e6da59df5f4343fbfadcec605cd2aa2ff3abc87055f20c6"}
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.243924 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"70005d4058f6cc4398488de0a790a0052b86d2ddf2afdef2c73058718d523fd6"}
Feb 16 00:06:32 crc kubenswrapper[4698]: W0216 00:06:32.300954 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Feb 16 00:06:32 crc kubenswrapper[4698]: E0216 00:06:32.301076 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError"
Feb 16 00:06:32 crc kubenswrapper[4698]: W0216 00:06:32.342524 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Feb 16 00:06:32 crc kubenswrapper[4698]: E0216 00:06:32.342603 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError"
Feb 16 00:06:32 crc kubenswrapper[4698]: E0216 00:06:32.421891 4698 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.143:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894915f1c6bc176 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:06:31.148192118 +0000 UTC m=+0.806090920,LastTimestamp:2026-02-16 00:06:31.148192118 +0000 UTC m=+0.806090920,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:06:32 crc kubenswrapper[4698]: E0216 00:06:32.571587 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="1.6s"
Feb 16 00:06:32 crc kubenswrapper[4698]: W0216 00:06:32.698397 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Feb 16 00:06:32 crc kubenswrapper[4698]: E0216 00:06:32.698544 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError"
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.880446 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.882095 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.882147 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.882158 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:32 crc kubenswrapper[4698]: I0216 00:06:32.882191 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 00:06:32 crc kubenswrapper[4698]: E0216 00:06:32.882761 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.151809 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.152328 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:44:28.353281287 +0000 UTC
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.182991 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 16 00:06:33 crc kubenswrapper[4698]: E0216 00:06:33.184296 4698 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.248353 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493"}
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.249923 4698 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b" exitCode=0
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.250066 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.250111 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b"}
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.251043 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.251080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.251091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.251958 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4" exitCode=0
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.252016 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4"}
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.252056 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.253052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.253109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.253127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.253847 4698 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489" exitCode=0
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.253924 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489"}
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.254037 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.255396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.255428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.255431 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.255440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.256314 4698 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b755a5466b8fefa2667afc5e8cc0a6f22f583ce8800b10f8836d9957364d5315" exitCode=0
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.256359 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b755a5466b8fefa2667afc5e8cc0a6f22f583ce8800b10f8836d9957364d5315"}
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.256448 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.256639 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.256674 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.256686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.257408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.257436 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:33 crc kubenswrapper[4698]: I0216 00:06:33.257446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.151600 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.152664 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:14:10.058483818 +0000 UTC
Feb 16 00:06:34 crc kubenswrapper[4698]: E0216 00:06:34.173254 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="3.2s"
Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.261888 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fdcedd751c83fbf60506332eb79ff3d8e7bfd67099c0bcf36b1acfff96b35bc6"} Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.261939 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0f2915589b18e0db91214ca20d06f488bbc04f6f8a83bd4ccaaf294f99fc4aa6"} Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.261949 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"60a206ea8b608682c6898dd9051903dcdf19e39c22d1adba760b43177b474a99"} Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.261950 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.262933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.262966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.262976 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.265292 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012"} Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.265318 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9"} Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.265332 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4"} Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.267712 4698 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021" exitCode=0 Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.267779 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021"} Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.267886 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.269119 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.269152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.269166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.271593 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.271639 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d514d688c0cca450cc92f9bc0c0c996bee10cf7d03b1c4b2e30d5afd36db85df"} Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.273701 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.273724 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.273736 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.281393 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974"} Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.281454 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe"} Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.281467 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b"} Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.281593 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.282833 4698 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.282881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.282898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.397809 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 00:06:34 crc kubenswrapper[4698]: W0216 00:06:34.423865 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Feb 16 00:06:34 crc kubenswrapper[4698]: E0216 00:06:34.423967 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Feb 16 00:06:34 crc kubenswrapper[4698]: W0216 00:06:34.458002 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Feb 16 00:06:34 crc kubenswrapper[4698]: E0216 00:06:34.458090 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.482920 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.484520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.484569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.484586 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:34 crc kubenswrapper[4698]: I0216 00:06:34.484641 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 00:06:34 crc kubenswrapper[4698]: E0216 00:06:34.485604 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc" Feb 16 00:06:35 crc kubenswrapper[4698]: W0216 00:06:35.079218 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Feb 16 00:06:35 crc kubenswrapper[4698]: E0216 00:06:35.079309 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.151520 4698 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.153695 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 06:09:26.629238896 +0000 UTC Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.287224 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f"} Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.287486 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401"} Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.287638 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.294563 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.294898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.295602 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.297283 4698 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca" exitCode=0 Feb 16 
00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.297433 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.297386 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca"} Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.297768 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.297807 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.297868 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.299071 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.299169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.299143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.299358 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.299387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.299334 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:35 crc 
kubenswrapper[4698]: I0216 00:06:35.299749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.299791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.299809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.299950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.299981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.299998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:35 crc kubenswrapper[4698]: I0216 00:06:35.476089 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.158649 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 10:41:55.037414242 +0000 UTC Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.307925 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.307979 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.308037 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6"} Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.308103 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.308114 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf"} Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.308143 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5"} Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.308158 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9"} Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.308914 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.308943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.308953 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.308970 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.308984 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.308992 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.444931 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.445212 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.447172 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.447225 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.447237 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:36 crc kubenswrapper[4698]: I0216 00:06:36.464232 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.159204 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 14:48:20.537551951 +0000 UTC Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.245186 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.317989 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d"} Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.318096 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.318161 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.318268 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.318296 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.319230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.319271 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.319283 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.319654 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.319703 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.319719 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.319801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 
00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.319847 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.319860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.686078 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.687700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.687760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.687774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:37 crc kubenswrapper[4698]: I0216 00:06:37.687806 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.160087 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:31:11.714112099 +0000 UTC Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.219954 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.320110 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.320164 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.320250 4698 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.321386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.321413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.321423 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.321736 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.321772 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.321785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.852728 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.925959 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.926159 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.926211 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.928098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 
00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.928165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:38 crc kubenswrapper[4698]: I0216 00:06:38.928186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:39 crc kubenswrapper[4698]: I0216 00:06:39.123407 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:06:39 crc kubenswrapper[4698]: I0216 00:06:39.160687 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 11:56:53.410187316 +0000 UTC
Feb 16 00:06:39 crc kubenswrapper[4698]: I0216 00:06:39.323375 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:39 crc kubenswrapper[4698]: I0216 00:06:39.324807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:39 crc kubenswrapper[4698]: I0216 00:06:39.324894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:39 crc kubenswrapper[4698]: I0216 00:06:39.324919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.161463 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:24:17.963428584 +0000 UTC
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.326052 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.327105 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.327152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.327164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.876512 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.876796 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.878166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.878226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.878241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.932431 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.932683 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.933846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.933883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:40 crc kubenswrapper[4698]: I0216 00:06:40.933898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:41 crc kubenswrapper[4698]: I0216 00:06:41.161746 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:41:16.605276655 +0000 UTC
Feb 16 00:06:41 crc kubenswrapper[4698]: I0216 00:06:41.220379 4698 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 00:06:41 crc kubenswrapper[4698]: I0216 00:06:41.220479 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 00:06:41 crc kubenswrapper[4698]: E0216 00:06:41.383033 4698 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 16 00:06:41 crc kubenswrapper[4698]: I0216 00:06:41.845609 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 16 00:06:41 crc kubenswrapper[4698]: I0216 00:06:41.845926 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:41 crc kubenswrapper[4698]: I0216 00:06:41.847491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:41 crc kubenswrapper[4698]: I0216 00:06:41.847530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:41 crc kubenswrapper[4698]: I0216 00:06:41.847543 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:42 crc kubenswrapper[4698]: I0216 00:06:42.162686 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:35:19.25878094 +0000 UTC
Feb 16 00:06:43 crc kubenswrapper[4698]: I0216 00:06:43.163413 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:43:39.754662906 +0000 UTC
Feb 16 00:06:44 crc kubenswrapper[4698]: I0216 00:06:44.164291 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:51:23.972363336 +0000 UTC
Feb 16 00:06:45 crc kubenswrapper[4698]: I0216 00:06:45.164654 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:20:16.224321442 +0000 UTC
Feb 16 00:06:45 crc kubenswrapper[4698]: I0216 00:06:45.477367 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 00:06:45 crc kubenswrapper[4698]: I0216 00:06:45.477468 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 00:06:45 crc kubenswrapper[4698]: W0216 00:06:45.731867 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 16 00:06:45 crc kubenswrapper[4698]: I0216 00:06:45.731990 4698 trace.go:236] Trace[1204771714]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 00:06:35.729) (total time: 10002ms):
Feb 16 00:06:45 crc kubenswrapper[4698]: Trace[1204771714]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (00:06:45.731)
Feb 16 00:06:45 crc kubenswrapper[4698]: Trace[1204771714]: [10.002268921s] [10.002268921s] END
Feb 16 00:06:45 crc kubenswrapper[4698]: E0216 00:06:45.732026 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 16 00:06:46 crc kubenswrapper[4698]: I0216 00:06:46.096388 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 16 00:06:46 crc kubenswrapper[4698]: I0216 00:06:46.096483 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 16 00:06:46 crc kubenswrapper[4698]: I0216 00:06:46.165196 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:11:30.687521457 +0000 UTC
Feb 16 00:06:46 crc kubenswrapper[4698]: I0216 00:06:46.352867 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 16 00:06:46 crc kubenswrapper[4698]: I0216 00:06:46.355360 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f" exitCode=255
Feb 16 00:06:46 crc kubenswrapper[4698]: I0216 00:06:46.355454 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f"}
Feb 16 00:06:46 crc kubenswrapper[4698]: I0216 00:06:46.355766 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:46 crc kubenswrapper[4698]: I0216 00:06:46.357577 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:46 crc kubenswrapper[4698]: I0216 00:06:46.357650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:46 crc kubenswrapper[4698]: I0216 00:06:46.357668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:46 crc kubenswrapper[4698]: I0216 00:06:46.358649 4698 scope.go:117] "RemoveContainer" containerID="c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f"
Feb 16 00:06:46 crc kubenswrapper[4698]: I0216 00:06:46.681092 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:47 crc kubenswrapper[4698]: I0216 00:06:47.166240 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:58:27.189320294 +0000 UTC
Feb 16 00:06:47 crc kubenswrapper[4698]: I0216 00:06:47.362542 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 16 00:06:47 crc kubenswrapper[4698]: I0216 00:06:47.365358 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30"}
Feb 16 00:06:47 crc kubenswrapper[4698]: I0216 00:06:47.365544 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:47 crc kubenswrapper[4698]: I0216 00:06:47.366794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:47 crc kubenswrapper[4698]: I0216 00:06:47.366859 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:47 crc kubenswrapper[4698]: I0216 00:06:47.366882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:48 crc kubenswrapper[4698]: I0216 00:06:48.166455 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:36:01.490051635 +0000 UTC
Feb 16 00:06:48 crc kubenswrapper[4698]: I0216 00:06:48.369479 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:48 crc kubenswrapper[4698]: I0216 00:06:48.370017 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:48 crc kubenswrapper[4698]: I0216 00:06:48.373891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:48 crc kubenswrapper[4698]: I0216 00:06:48.373979 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:48 crc kubenswrapper[4698]: I0216 00:06:48.374002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:48 crc kubenswrapper[4698]: I0216 00:06:48.861084 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:06:48 crc kubenswrapper[4698]: I0216 00:06:48.861387 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:48 crc kubenswrapper[4698]: I0216 00:06:48.863237 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:48 crc kubenswrapper[4698]: I0216 00:06:48.863305 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:48 crc kubenswrapper[4698]: I0216 00:06:48.863316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:49 crc kubenswrapper[4698]: I0216 00:06:49.167877 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:36:16.120704212 +0000 UTC
Feb 16 00:06:49 crc kubenswrapper[4698]: I0216 00:06:49.372774 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:49 crc kubenswrapper[4698]: I0216 00:06:49.374268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:49 crc kubenswrapper[4698]: I0216 00:06:49.374343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:49 crc kubenswrapper[4698]: I0216 00:06:49.374372 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:49 crc kubenswrapper[4698]: I0216 00:06:49.701712 4698 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 16 00:06:50 crc kubenswrapper[4698]: I0216 00:06:50.168766 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:42:28.668119284 +0000 UTC
Feb 16 00:06:50 crc kubenswrapper[4698]: I0216 00:06:50.484374 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:50 crc kubenswrapper[4698]: I0216 00:06:50.485062 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:06:50 crc kubenswrapper[4698]: I0216 00:06:50.487202 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:06:50 crc kubenswrapper[4698]: I0216 00:06:50.487265 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:06:50 crc kubenswrapper[4698]: I0216 00:06:50.487281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:06:50 crc kubenswrapper[4698]: I0216 00:06:50.491133 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.089763 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.092327 4698 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.092465 4698 trace.go:236] Trace[1714245165]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 00:06:40.444) (total time: 10647ms):
Feb 16 00:06:51 crc kubenswrapper[4698]: Trace[1714245165]: ---"Objects listed" error: 10647ms (00:06:51.092)
Feb 16 00:06:51 crc kubenswrapper[4698]: Trace[1714245165]: [10.64798735s] [10.64798735s] END
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.092480 4698 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.094040 4698 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.095626 4698 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.095858 4698 trace.go:236] Trace[1047945698]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 00:06:38.781) (total time: 12314ms):
Feb 16 00:06:51 crc kubenswrapper[4698]: Trace[1047945698]: ---"Objects listed" error: 12314ms (00:06:51.095)
Feb 16 00:06:51 crc kubenswrapper[4698]: Trace[1047945698]: [12.314237794s] [12.314237794s] END
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.096099 4698 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.097691 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.203033 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:45:36.994944236 +0000 UTC
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.203157 4698 apiserver.go:52] "Watching apiserver"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.205467 4698 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.205869 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"]
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.206342 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.206427 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.206346 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.206848 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.206919 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.215890 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.219638 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.219984 4698 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.220025 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.221285 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.221438 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.229528 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.229672 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.231820 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.234895 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.235925 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.236334 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.236607 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.244259 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.244681 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.252901 4698 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.276846 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.293421 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304128 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304209 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304240 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304264 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304483 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304554 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304577 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304595 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304633 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304659 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304677 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304739 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304767 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304795 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304823 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304846 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304864 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304889 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304905 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304935 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.304994 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305048 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305112 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305185 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305221 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305245 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305265 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305307 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305325 4698 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305342 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305362 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305380 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305400 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305421 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: 
\"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305440 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305457 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305497 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305516 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305539 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305581 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305603 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305649 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305675 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305702 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305729 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305755 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305782 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305810 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305839 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305865 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305890 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305918 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305941 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305962 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305979 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305999 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " 
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306019 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306042 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306070 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306133 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306163 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306188 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305151 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.306235 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:06:51.806206715 +0000 UTC m=+21.464105577 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305184 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305337 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305344 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305589 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305634 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305656 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305867 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.305896 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306082 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306112 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306286 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306546 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306585 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306650 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306682 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306725 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306761 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306791 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306818 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306845 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306873 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306902 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306976 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307018 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307056 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307087 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307118 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307151 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307184 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307215 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307250 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307286 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307323 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307362 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307396 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307423 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307448 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307477 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307508 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307538 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307565 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307594 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307653 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307682 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307713 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307740 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307770 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307799 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307832 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307860 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307888 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307916 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307943 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307973 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308003 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308030 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308068 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308096 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308167 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308200 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308229 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308270 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308325 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308354 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308389 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308417 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308444 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308469 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308499 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308525 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308552 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308578 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308603 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308647 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308685 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308711 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308737 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308768 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308797 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308823 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308853 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308880 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308906 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308931 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308956 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308981 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309010 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309035 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309059 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309085 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309110 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309134 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309159 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309198 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309225 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309460 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309492 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309529 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309557 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309582 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309607 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309660 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306592 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309685 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306588 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309701 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306696 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306719 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306843 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306889 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306945 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.306988 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307121 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307174 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307289 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307346 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310112 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310141 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310157 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310214 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309714 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310351 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310391 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310412 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310447 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310475 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310506 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310528 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310549 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310603 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310638 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 00:06:51 
crc kubenswrapper[4698]: I0216 00:06:51.310658 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310676 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310628 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310951 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.311062 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.311295 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.311385 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.312182 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.312193 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.308666 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.309593 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.312355 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.307462 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.312344 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.312525 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.313217 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.313290 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.313366 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.313410 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.313753 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.313944 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.313986 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.314335 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.310696 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.314551 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.314642 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.314679 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 00:06:51 crc 
kubenswrapper[4698]: I0216 00:06:51.314717 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.314749 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.314779 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.314806 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.314835 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.314863 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.314890 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.314916 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.314941 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.314967 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315149 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 
00:06:51.315180 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315234 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315272 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315301 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315328 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315360 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" 
(UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315376 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315392 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315558 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315592 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315641 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315650 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315672 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315772 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315807 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.315886 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 00:06:51 crc kubenswrapper[4698]: 
I0216 00:06:51.315984 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316068 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316425 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316458 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316482 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316511 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316536 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316560 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316554 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316591 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316655 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316680 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316685 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316702 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316726 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316747 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316769 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316800 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316831 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.316977 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.317073 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.317110 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.317268 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.317342 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.317685 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.318268 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.318302 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.318430 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.319069 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.319081 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.319225 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.319293 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.319373 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.319450 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.319835 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.319841 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.319879 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.319962 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.320054 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.320066 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.320142 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.320264 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.320323 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.320516 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.320969 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.321116 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.321217 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.321273 4698 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.321354 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.321382 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:51.821358459 +0000 UTC m=+21.479257221 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.321432 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.321522 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.321838 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.321857 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.321605 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.321963 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.322013 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.322057 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.322250 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.322362 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.322807 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.322846 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.323129 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.323385 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.324876 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.325084 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.326350 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327137 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327192 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327356 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327446 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.327560 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:51.827531893 +0000 UTC m=+21.485430655 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327868 4698 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327890 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327901 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327913 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327924 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327935 4698 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327947 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327958 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327968 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327979 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.328011 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.328023 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.328862 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.328034 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.328917 4698 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.328929 4698 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.328941 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.328954 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.328967 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.328976 4698 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.328988 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.328999 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.329011 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.329025 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.329037 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.329047 4698 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.329058 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.329069 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.329082 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.329082 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.329095 4698 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.329136 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.329351 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.329412 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.329769 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.330437 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.337976 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.338165 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.327586 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.338296 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.338539 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.338913 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.339105 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.339181 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.339670 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.339900 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.340057 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.340310 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.340334 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.340406 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.340644 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod 
"bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.340661 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.341046 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.341507 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.341750 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.341840 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.342090 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.342163 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.342156 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.342360 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.342404 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.342468 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.342720 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.342963 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.343817 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.343077 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.343149 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.344005 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.344570 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.344637 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.344975 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.345251 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.345296 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.345353 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.345720 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.345804 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.345975 4698 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346015 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346043 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346065 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346089 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346111 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346138 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346160 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346184 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346203 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346226 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346245 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346266 4698 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346288 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346311 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346490 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346817 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346850 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346883 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.346939 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347044 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347189 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347261 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347388 4698 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347411 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347430 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347445 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347077 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347454 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347409 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.347470 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:51.847441358 +0000 UTC m=+21.505340130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347492 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347552 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347579 4698 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347601 4698 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347595 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347656 4698 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347765 4698 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347775 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347823 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347850 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.347910 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.348771 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.349030 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.349052 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.349068 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.349134 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:51.849108411 +0000 UTC m=+21.507007193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.349907 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.350122 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.350193 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.350544 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.351120 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.352841 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.353181 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.355974 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.356066 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.357429 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.359283 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.360831 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.360931 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.361100 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.361920 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.362525 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.362818 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.365403 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.365858 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.365855 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.366119 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.366271 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.366151 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.366191 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.366577 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.366860 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.366903 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.367199 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.367223 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.367352 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.367424 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.367605 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.368602 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.368896 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.370215 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.374450 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.378924 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.382236 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.387627 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.391822 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.400172 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.400312 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.406892 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.418096 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.429877 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.440823 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.448342 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.448513 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.448629 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.448798 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.448885 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.448914 4698 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.448927 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.448938 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.448949 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.448960 4698 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.448969 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.448979 4698 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.448990 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449000 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449009 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449019 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449029 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449038 4698 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449047 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449055 4698 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449074 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449084 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449093 4698 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449102 4698 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 
16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449112 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449121 4698 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449131 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449141 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449151 4698 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449164 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449175 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449222 4698 
reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449234 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449244 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449252 4698 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449261 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449270 4698 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449278 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449287 4698 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449296 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449306 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449315 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449324 4698 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449333 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449342 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449350 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449359 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449368 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449377 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449386 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449396 4698 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449404 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449414 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449425 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449448 4698 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449457 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449466 4698 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449474 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449483 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449491 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on 
node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449500 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449508 4698 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449539 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449551 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449562 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449573 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449688 4698 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449744 4698 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449757 4698 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449772 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449785 4698 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449821 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449831 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449840 4698 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449851 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node 
\"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449861 4698 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449894 4698 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449906 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449915 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449926 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449936 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449949 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: 
I0216 00:06:51.449984 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.449993 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450003 4698 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450013 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450023 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450033 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450068 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450078 4698 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450091 4698 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450101 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450110 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450142 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450152 4698 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450162 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450171 4698 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc 
kubenswrapper[4698]: I0216 00:06:51.450181 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450191 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450221 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450232 4698 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450242 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450253 4698 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450264 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450273 4698 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450333 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450347 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450391 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450406 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450418 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450434 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450482 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450497 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450508 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450521 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450533 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450570 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450579 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450588 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450596 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450605 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450644 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450659 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450670 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450681 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450693 4698 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath 
\"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450758 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450768 4698 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450779 4698 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450790 4698 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450802 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450821 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450831 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 
crc kubenswrapper[4698]: I0216 00:06:51.450841 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450851 4698 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450862 4698 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450871 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450880 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450890 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450899 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450908 4698 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450916 4698 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.450926 4698 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.451061 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.462639 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.550967 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.559104 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.567128 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 00:06:51 crc kubenswrapper[4698]: W0216 00:06:51.576999 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-019e0d578bd81202af3bb721bce6e6205a85552a7e37fb47ab23593b3e2edcb2 WatchSource:0}: Error finding container 019e0d578bd81202af3bb721bce6e6205a85552a7e37fb47ab23593b3e2edcb2: Status 404 returned error can't find the container with id 019e0d578bd81202af3bb721bce6e6205a85552a7e37fb47ab23593b3e2edcb2 Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.854885 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.855095 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:06:52.855062346 +0000 UTC m=+22.512961118 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.855651 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.855748 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.855797 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:06:51 crc kubenswrapper[4698]: I0216 00:06:51.855832 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.855983 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.856006 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.856019 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.856069 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:52.856057497 +0000 UTC m=+22.513956269 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.856311 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.856364 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:52.856350206 +0000 UTC m=+22.514248968 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.856410 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.856432 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.856444 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.856537 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:52.85648234 +0000 UTC m=+22.514381112 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.856540 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:06:51 crc kubenswrapper[4698]: E0216 00:06:51.856751 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:52.856717288 +0000 UTC m=+22.514616090 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.225748 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:34:45.213100184 +0000 UTC Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.237325 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.258399 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.264281 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.277187 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.281563 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.298827 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.317165 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.344759 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.362285 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.376664 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.383989 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a4b2a9bcb70d638563909c0dd8ed9808cc4e6af95b737c3869dae98270e8893a"} Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.386228 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81"} Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.386300 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56"} Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.386318 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"019e0d578bd81202af3bb721bce6e6205a85552a7e37fb47ab23593b3e2edcb2"} Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 
00:06:52.388249 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440"} Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.388289 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f1bd37b261c5c88b8f163da2c737d4ae15c5991f2b62d8239d48740b4f94e76c"} Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.406633 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\
\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.428592 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.449658 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.467410 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.479709 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.493034 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.505901 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.525566 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.549764 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.559878 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.571496 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.587207 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.605213 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, 
/tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.619503 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.632396 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.648707 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.864933 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.865018 4698 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.865046 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.865069 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:06:52 crc kubenswrapper[4698]: I0216 00:06:52.865091 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:06:52 crc kubenswrapper[4698]: E0216 00:06:52.865224 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:06:52 crc kubenswrapper[4698]: E0216 00:06:52.865258 4698 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:06:52 crc kubenswrapper[4698]: E0216 00:06:52.865271 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:06:52 crc kubenswrapper[4698]: E0216 00:06:52.865290 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:06:52 crc kubenswrapper[4698]: E0216 00:06:52.865325 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:54.865310554 +0000 UTC m=+24.523209316 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:06:52 crc kubenswrapper[4698]: E0216 00:06:52.865431 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:06:52 crc kubenswrapper[4698]: E0216 00:06:52.865480 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:06:54.865432728 +0000 UTC m=+24.523331530 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:06:52 crc kubenswrapper[4698]: E0216 00:06:52.865500 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:06:52 crc kubenswrapper[4698]: E0216 00:06:52.865541 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:06:52 crc kubenswrapper[4698]: E0216 00:06:52.865587 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:54.865539892 +0000 UTC m=+24.523438824 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:06:52 crc kubenswrapper[4698]: E0216 00:06:52.865674 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:54.865654505 +0000 UTC m=+24.523553497 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:06:52 crc kubenswrapper[4698]: E0216 00:06:52.865696 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:06:52 crc kubenswrapper[4698]: E0216 00:06:52.865818 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:54.86579686 +0000 UTC m=+24.523695662 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.226400 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 14:46:25.239971941 +0000 UTC
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.230748 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.230806 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:06:53 crc kubenswrapper[4698]: E0216 00:06:53.230906 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.230948 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:06:53 crc kubenswrapper[4698]: E0216 00:06:53.231126 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 00:06:53 crc kubenswrapper[4698]: E0216 00:06:53.231268 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.237859 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.238941 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.241947 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.243697 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.245355 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.245990 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.246754 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.247892 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.248672 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.250269 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.250948 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.252195 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.252763 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.253357 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.254449 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.255151 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.256315 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.256849 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.257527 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.258839 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.259442 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.260740 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.261221 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.262485 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.263010 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.263900 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.265418 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.269902 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.270731 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.272076 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.272783 4698 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.272919 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.274974 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.276072 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.276780 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.278708 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.280035 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.280937 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.282162 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.283087 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.284228 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.284995 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.286232 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.287054 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.288166 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.288853 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.289903 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.290868 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.291956 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.292586 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.293596 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.294263 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.295011 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Feb 16 00:06:53 crc kubenswrapper[4698]: I0216 00:06:53.296052 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Feb 16 00:06:54 crc kubenswrapper[4698]: I0216 00:06:54.227035 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:20:38.438193694 +0000 UTC
Feb 16 00:06:54 crc kubenswrapper[4698]: I0216 00:06:54.884705 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:06:54 crc kubenswrapper[4698]: I0216 00:06:54.884911 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:06:54 crc kubenswrapper[4698]: I0216 00:06:54.885000 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:06:54 crc kubenswrapper[4698]: E0216 00:06:54.885046 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:06:58.884991926 +0000 UTC m=+28.542890718 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:06:54 crc kubenswrapper[4698]: I0216 00:06:54.885135 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:06:54 crc kubenswrapper[4698]: I0216 00:06:54.885216 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:06:54 crc kubenswrapper[4698]: E0216 00:06:54.885292 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 00:06:54 crc kubenswrapper[4698]: E0216 00:06:54.885315 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 00:06:54 crc kubenswrapper[4698]: E0216 00:06:54.885355 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 00:06:54 crc kubenswrapper[4698]: E0216 00:06:54.885406 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:58.885381807 +0000 UTC m=+28.543280569 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 00:06:54 crc kubenswrapper[4698]: E0216 00:06:54.885407 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 00:06:54 crc kubenswrapper[4698]: E0216 00:06:54.885505 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 00:06:54 crc kubenswrapper[4698]: E0216 00:06:54.885543 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 00:06:54 crc kubenswrapper[4698]: E0216 00:06:54.885564 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 00:06:54 crc kubenswrapper[4698]: E0216 00:06:54.885410 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 00:06:54 crc kubenswrapper[4698]: E0216 00:06:54.885540 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:58.885510651 +0000 UTC m=+28.543409453 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 00:06:54 crc kubenswrapper[4698]: E0216 00:06:54.885752 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:58.885677197 +0000 UTC m=+28.543575989 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 00:06:54 crc kubenswrapper[4698]: E0216 00:06:54.885797 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:06:58.88578486 +0000 UTC m=+28.543683662 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 00:06:55 crc kubenswrapper[4698]: I0216 00:06:55.228110 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 05:43:26.437871455 +0000 UTC
Feb 16 00:06:55 crc kubenswrapper[4698]: I0216 00:06:55.231685 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:06:55 crc kubenswrapper[4698]: I0216 00:06:55.231734 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:06:55 crc kubenswrapper[4698]: I0216 00:06:55.231836 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:06:55 crc kubenswrapper[4698]: E0216 00:06:55.231877 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 00:06:55 crc kubenswrapper[4698]: E0216 00:06:55.232095 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 00:06:55 crc kubenswrapper[4698]: E0216 00:06:55.232213 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 00:06:55 crc kubenswrapper[4698]: I0216 00:06:55.401010 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde"}
Feb 16 00:06:55 crc kubenswrapper[4698]: I0216 00:06:55.431961 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:55Z is after 2025-08-24T17:21:41Z"
Feb 16 00:06:55 crc kubenswrapper[4698]: I0216 00:06:55.448232 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:55Z is after 2025-08-24T17:21:41Z"
Feb 16 00:06:55 crc kubenswrapper[4698]: I0216 00:06:55.462505 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:55 crc kubenswrapper[4698]: I0216 00:06:55.478931 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:55 crc kubenswrapper[4698]: I0216 00:06:55.493296 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:55 crc kubenswrapper[4698]: I0216 00:06:55.517751 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:55 crc kubenswrapper[4698]: I0216 00:06:55.533498 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:55 crc kubenswrapper[4698]: I0216 00:06:55.555678 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:56 crc kubenswrapper[4698]: I0216 00:06:56.228963 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:39:42.306635971 +0000 UTC Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.229905 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 14:40:01.170986864 +0000 UTC Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.231301 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.231381 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.231301 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:06:57 crc kubenswrapper[4698]: E0216 00:06:57.231526 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:06:57 crc kubenswrapper[4698]: E0216 00:06:57.231756 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:06:57 crc kubenswrapper[4698]: E0216 00:06:57.231907 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.359336 4698 csr.go:261] certificate signing request csr-gbfsg is approved, waiting to be issued Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.380455 4698 csr.go:257] certificate signing request csr-gbfsg is issued Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.411262 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-256rr"] Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.411730 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-256rr" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.415878 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.416454 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.416518 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.440556 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.444633 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-m9h8s"] Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.445072 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-m9h8s" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.447311 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.447451 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.447568 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.447801 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.471693 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.486321 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.498288 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.498456 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.500218 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.500258 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.500272 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.500410 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.508085 4698 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.508303 4698 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.511526 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9rrq\" (UniqueName: \"kubernetes.io/projected/24906ea4-6ef0-4686-a810-4f6da05061f7-kube-api-access-k9rrq\") pod \"node-resolver-256rr\" (UID: \"24906ea4-6ef0-4686-a810-4f6da05061f7\") " pod="openshift-dns/node-resolver-256rr" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.511574 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e7e45b5f-053d-4598-bdfc-cdf6903bf4b1-serviceca\") pod \"node-ca-m9h8s\" (UID: \"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\") " pod="openshift-image-registry/node-ca-m9h8s" Feb 16 00:06:57 crc 
kubenswrapper[4698]: I0216 00:06:57.512573 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r77mx\" (UniqueName: \"kubernetes.io/projected/e7e45b5f-053d-4598-bdfc-cdf6903bf4b1-kube-api-access-r77mx\") pod \"node-ca-m9h8s\" (UID: \"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\") " pod="openshift-image-registry/node-ca-m9h8s" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.512679 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e7e45b5f-053d-4598-bdfc-cdf6903bf4b1-host\") pod \"node-ca-m9h8s\" (UID: \"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\") " pod="openshift-image-registry/node-ca-m9h8s" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.512903 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/24906ea4-6ef0-4686-a810-4f6da05061f7-hosts-file\") pod \"node-resolver-256rr\" (UID: \"24906ea4-6ef0-4686-a810-4f6da05061f7\") " pod="openshift-dns/node-resolver-256rr" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.512937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.512978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.512991 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.513013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.513028 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:57Z","lastTransitionTime":"2026-02-16T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.517921 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: E0216 00:06:57.576655 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.579606 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.582542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.582578 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.582591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.582624 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.582637 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:57Z","lastTransitionTime":"2026-02-16T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.614360 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9rrq\" (UniqueName: \"kubernetes.io/projected/24906ea4-6ef0-4686-a810-4f6da05061f7-kube-api-access-k9rrq\") pod \"node-resolver-256rr\" (UID: \"24906ea4-6ef0-4686-a810-4f6da05061f7\") " pod="openshift-dns/node-resolver-256rr" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.614403 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e7e45b5f-053d-4598-bdfc-cdf6903bf4b1-serviceca\") pod \"node-ca-m9h8s\" (UID: \"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\") " pod="openshift-image-registry/node-ca-m9h8s" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.614430 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r77mx\" (UniqueName: \"kubernetes.io/projected/e7e45b5f-053d-4598-bdfc-cdf6903bf4b1-kube-api-access-r77mx\") pod \"node-ca-m9h8s\" (UID: \"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\") " pod="openshift-image-registry/node-ca-m9h8s" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.614456 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e7e45b5f-053d-4598-bdfc-cdf6903bf4b1-host\") pod \"node-ca-m9h8s\" (UID: \"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\") " pod="openshift-image-registry/node-ca-m9h8s" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.614485 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/24906ea4-6ef0-4686-a810-4f6da05061f7-hosts-file\") pod \"node-resolver-256rr\" (UID: \"24906ea4-6ef0-4686-a810-4f6da05061f7\") " pod="openshift-dns/node-resolver-256rr" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.614686 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/24906ea4-6ef0-4686-a810-4f6da05061f7-hosts-file\") pod \"node-resolver-256rr\" (UID: \"24906ea4-6ef0-4686-a810-4f6da05061f7\") " pod="openshift-dns/node-resolver-256rr" Feb 16 00:06:57 crc kubenswrapper[4698]: E0216 00:06:57.614652 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.614926 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e7e45b5f-053d-4598-bdfc-cdf6903bf4b1-host\") pod \"node-ca-m9h8s\" (UID: \"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\") " pod="openshift-image-registry/node-ca-m9h8s" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.616470 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e7e45b5f-053d-4598-bdfc-cdf6903bf4b1-serviceca\") pod \"node-ca-m9h8s\" (UID: \"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\") " pod="openshift-image-registry/node-ca-m9h8s" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.617472 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.619801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.619858 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.619871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.619892 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.619907 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:57Z","lastTransitionTime":"2026-02-16T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:57 crc kubenswrapper[4698]: E0216 00:06:57.641507 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.641841 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9rrq\" (UniqueName: \"kubernetes.io/projected/24906ea4-6ef0-4686-a810-4f6da05061f7-kube-api-access-k9rrq\") pod \"node-resolver-256rr\" (UID: \"24906ea4-6ef0-4686-a810-4f6da05061f7\") " pod="openshift-dns/node-resolver-256rr" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.641897 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r77mx\" (UniqueName: \"kubernetes.io/projected/e7e45b5f-053d-4598-bdfc-cdf6903bf4b1-kube-api-access-r77mx\") pod \"node-ca-m9h8s\" (UID: \"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\") " pod="openshift-image-registry/node-ca-m9h8s" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.648406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.648452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.648469 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.648495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.648510 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:57Z","lastTransitionTime":"2026-02-16T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.654108 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: E0216 00:06:57.662859 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.666289 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.666558 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.666588 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.666600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.666631 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.666642 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:57Z","lastTransitionTime":"2026-02-16T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:57 crc kubenswrapper[4698]: E0216 00:06:57.679462 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: E0216 00:06:57.679583 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.681331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.681377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.681389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.681408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.681419 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:57Z","lastTransitionTime":"2026-02-16T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.689117 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.705950 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.722778 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.724817 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-256rr" Feb 16 00:06:57 crc kubenswrapper[4698]: W0216 00:06:57.741076 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24906ea4_6ef0_4686_a810_4f6da05061f7.slice/crio-049f05f67faaa4441a88226fc4f226412a51fdbca5539583765c73d88e0089d1 WatchSource:0}: Error finding container 049f05f67faaa4441a88226fc4f226412a51fdbca5539583765c73d88e0089d1: Status 404 returned error can't find the container with id 049f05f67faaa4441a88226fc4f226412a51fdbca5539583765c73d88e0089d1 Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.743272 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.757998 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.759146 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-m9h8s" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.775105 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.783494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.783527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.783539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.783558 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.783571 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:57Z","lastTransitionTime":"2026-02-16T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.793481 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.809205 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.827951 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.845909 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-z56m2"] Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.850792 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.853678 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.853686 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.853824 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.853681 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.855947 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.856015 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.870924 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.886749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.886791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.886803 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.886820 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.886830 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:57Z","lastTransitionTime":"2026-02-16T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.888286 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.899886 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.916726 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b351654-277f-4d0d-84f9-b003f934936c-mcd-auth-proxy-config\") pod \"machine-config-daemon-z56m2\" (UID: \"7b351654-277f-4d0d-84f9-b003f934936c\") " pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.916765 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5rp6\" (UniqueName: \"kubernetes.io/projected/7b351654-277f-4d0d-84f9-b003f934936c-kube-api-access-v5rp6\") pod \"machine-config-daemon-z56m2\" (UID: \"7b351654-277f-4d0d-84f9-b003f934936c\") " pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:06:57 crc kubenswrapper[4698]: 
I0216 00:06:57.916782 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7b351654-277f-4d0d-84f9-b003f934936c-rootfs\") pod \"machine-config-daemon-z56m2\" (UID: \"7b351654-277f-4d0d-84f9-b003f934936c\") " pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.916801 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b351654-277f-4d0d-84f9-b003f934936c-proxy-tls\") pod \"machine-config-daemon-z56m2\" (UID: \"7b351654-277f-4d0d-84f9-b003f934936c\") " pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.929814 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\
\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.946764 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.962092 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.976692 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.988432 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:57Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.989867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.989899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.989911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.989928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:57 crc kubenswrapper[4698]: I0216 00:06:57.989944 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:57Z","lastTransitionTime":"2026-02-16T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.003258 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.018041 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b351654-277f-4d0d-84f9-b003f934936c-mcd-auth-proxy-config\") pod \"machine-config-daemon-z56m2\" (UID: \"7b351654-277f-4d0d-84f9-b003f934936c\") " pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.018100 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5rp6\" (UniqueName: \"kubernetes.io/projected/7b351654-277f-4d0d-84f9-b003f934936c-kube-api-access-v5rp6\") pod \"machine-config-daemon-z56m2\" (UID: \"7b351654-277f-4d0d-84f9-b003f934936c\") " pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.018122 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/7b351654-277f-4d0d-84f9-b003f934936c-rootfs\") pod \"machine-config-daemon-z56m2\" (UID: \"7b351654-277f-4d0d-84f9-b003f934936c\") " pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.018146 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b351654-277f-4d0d-84f9-b003f934936c-proxy-tls\") pod \"machine-config-daemon-z56m2\" (UID: \"7b351654-277f-4d0d-84f9-b003f934936c\") " pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.018662 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7b351654-277f-4d0d-84f9-b003f934936c-rootfs\") pod \"machine-config-daemon-z56m2\" (UID: \"7b351654-277f-4d0d-84f9-b003f934936c\") " pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.019072 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b351654-277f-4d0d-84f9-b003f934936c-mcd-auth-proxy-config\") pod \"machine-config-daemon-z56m2\" (UID: \"7b351654-277f-4d0d-84f9-b003f934936c\") " pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.023211 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.023327 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b351654-277f-4d0d-84f9-b003f934936c-proxy-tls\") pod \"machine-config-daemon-z56m2\" (UID: \"7b351654-277f-4d0d-84f9-b003f934936c\") " pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.040989 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5rp6\" (UniqueName: \"kubernetes.io/projected/7b351654-277f-4d0d-84f9-b003f934936c-kube-api-access-v5rp6\") pod \"machine-config-daemon-z56m2\" (UID: \"7b351654-277f-4d0d-84f9-b003f934936c\") " pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.042280 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.092887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.092937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.092947 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.092963 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.092976 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:58Z","lastTransitionTime":"2026-02-16T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.166499 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:06:58 crc kubenswrapper[4698]: W0216 00:06:58.181354 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b351654_277f_4d0d_84f9_b003f934936c.slice/crio-dcd67cab64fbdee2a6b414f7ffbf2433bf8e40d75f5b14bdb1d76930e097d776 WatchSource:0}: Error finding container dcd67cab64fbdee2a6b414f7ffbf2433bf8e40d75f5b14bdb1d76930e097d776: Status 404 returned error can't find the container with id dcd67cab64fbdee2a6b414f7ffbf2433bf8e40d75f5b14bdb1d76930e097d776 Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.195696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.195745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.195761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.195779 4698 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.195790 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:58Z","lastTransitionTime":"2026-02-16T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.229289 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.230050 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:21:57.472116232 +0000 UTC Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.235633 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.246984 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rs8xm"] Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.247674 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.253530 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rmrt5"] Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.259459 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.260185 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.260234 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.260235 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.260801 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.261709 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2dv2d"] Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.262074 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.262405 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.262540 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.264839 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.266899 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.267171 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.267205 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.267289 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.267484 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.267737 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.271486 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 
00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.271518 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.271864 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.305008 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.308433 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.308477 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.308488 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.308504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.308515 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:58Z","lastTransitionTime":"2026-02-16T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.319391 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.320573 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-cni-netd\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.320631 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovnkube-script-lib\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.320664 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-var-lib-kubelet\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.320690 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-log-socket\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.320712 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.320740 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbd26\" (UniqueName: \"kubernetes.io/projected/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-kube-api-access-kbd26\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.320763 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-multus-cni-dir\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.320789 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-hostroot\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.320817 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-run-multus-certs\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.320842 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38c8dc67-ba64-4599-a153-2e1b9b6627b6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.320870 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-slash\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.320894 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-var-lib-openvswitch\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321020 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38c8dc67-ba64-4599-a153-2e1b9b6627b6-cnibin\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321088 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vsqg\" (UniqueName: \"kubernetes.io/projected/69838a3a-c20d-4770-b95f-ab85a265d53c-kube-api-access-5vsqg\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321123 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-run-ovn-kubernetes\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321143 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/69838a3a-c20d-4770-b95f-ab85a265d53c-cni-binary-copy\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321165 
4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-var-lib-cni-bin\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321187 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-etc-kubernetes\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321234 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovn-node-metrics-cert\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321253 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-cnibin\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321271 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-os-release\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321287 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38c8dc67-ba64-4599-a153-2e1b9b6627b6-system-cni-dir\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321314 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-systemd\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321331 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-etc-openvswitch\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321386 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-node-log\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321407 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-multus-socket-dir-parent\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321441 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/69838a3a-c20d-4770-b95f-ab85a265d53c-multus-daemon-config\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321460 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38c8dc67-ba64-4599-a153-2e1b9b6627b6-os-release\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321600 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-env-overrides\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321724 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2rgg\" (UniqueName: \"kubernetes.io/projected/38c8dc67-ba64-4599-a153-2e1b9b6627b6-kube-api-access-v2rgg\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321795 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38c8dc67-ba64-4599-a153-2e1b9b6627b6-cni-binary-copy\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " 
pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321834 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-systemd-units\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321868 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-run-k8s-cni-cncf-io\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321893 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-run-netns\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321914 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-multus-conf-dir\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321932 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/38c8dc67-ba64-4599-a153-2e1b9b6627b6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: 
\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321961 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-kubelet\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.321981 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-openvswitch\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.322034 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-var-lib-cni-multus\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.322063 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-ovn\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.322081 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-cni-bin\") pod \"ovnkube-node-rmrt5\" 
(UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.322101 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-run-netns\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.322120 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovnkube-config\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.322141 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-system-cni-dir\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.334380 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.352925 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, 
/tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.368741 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.381682 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 00:01:57 +0000 UTC, rotation deadline is 2026-12-03 00:43:39.882086161 +0000 UTC Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.381762 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6960h36m41.500326747s for next certificate rotation Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.391346 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.404190 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.411779 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-256rr" event={"ID":"24906ea4-6ef0-4686-a810-4f6da05061f7","Type":"ContainerStarted","Data":"645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.411843 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-256rr" 
event={"ID":"24906ea4-6ef0-4686-a810-4f6da05061f7","Type":"ContainerStarted","Data":"049f05f67faaa4441a88226fc4f226412a51fdbca5539583765c73d88e0089d1"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.412213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.412323 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.412341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.412389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.412409 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:58Z","lastTransitionTime":"2026-02-16T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.414775 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerStarted","Data":"d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.414838 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerStarted","Data":"ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.414875 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerStarted","Data":"dcd67cab64fbdee2a6b414f7ffbf2433bf8e40d75f5b14bdb1d76930e097d776"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.416204 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m9h8s" event={"ID":"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1","Type":"ContainerStarted","Data":"fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.416250 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m9h8s" event={"ID":"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1","Type":"ContainerStarted","Data":"9449e3a1f24d6b09ae00d99799e6bbeaa8850d9d8c48c1bfb655b2042373cee5"} Feb 16 00:06:58 crc kubenswrapper[4698]: E0216 00:06:58.422928 4698 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.423183 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2rgg\" (UniqueName: \"kubernetes.io/projected/38c8dc67-ba64-4599-a153-2e1b9b6627b6-kube-api-access-v2rgg\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.423294 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-env-overrides\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.423379 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38c8dc67-ba64-4599-a153-2e1b9b6627b6-cni-binary-copy\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.423462 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/38c8dc67-ba64-4599-a153-2e1b9b6627b6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.423576 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-systemd-units\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: 
I0216 00:06:58.423691 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-run-k8s-cni-cncf-io\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.423758 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-run-k8s-cni-cncf-io\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.423785 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-systemd-units\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.423889 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-run-netns\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.423967 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-multus-conf-dir\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.424046 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-multus-conf-dir\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.423925 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-run-netns\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.424167 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-kubelet\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.424283 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-openvswitch\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.424377 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-openvswitch\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.424375 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-env-overrides\") pod \"ovnkube-node-rmrt5\" (UID: 
\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.424175 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-kubelet\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.424346 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/38c8dc67-ba64-4599-a153-2e1b9b6627b6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.424475 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38c8dc67-ba64-4599-a153-2e1b9b6627b6-cni-binary-copy\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.424503 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-var-lib-cni-multus\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.424401 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-var-lib-cni-multus\") pod \"multus-2dv2d\" (UID: 
\"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.424969 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-ovn\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425082 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-ovn\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425089 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-cni-bin\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425148 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-run-netns\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425171 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovnkube-config\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc 
kubenswrapper[4698]: I0216 00:06:58.425190 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-system-cni-dir\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425212 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-var-lib-kubelet\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425236 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-cni-netd\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425191 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-run-netns\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425254 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovnkube-script-lib\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425301 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-kbd26\" (UniqueName: \"kubernetes.io/projected/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-kube-api-access-kbd26\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425326 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-multus-cni-dir\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425332 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-var-lib-kubelet\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425345 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-log-socket\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425445 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-system-cni-dir\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425367 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-log-socket\") pod 
\"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425485 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425350 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-cni-netd\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425526 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-run-multus-certs\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425580 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-run-multus-certs\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425523 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425581 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38c8dc67-ba64-4599-a153-2e1b9b6627b6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425697 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-multus-cni-dir\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425698 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-hostroot\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425729 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-hostroot\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425751 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-slash\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc 
kubenswrapper[4698]: I0216 00:06:58.425754 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovnkube-config\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425774 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-var-lib-openvswitch\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425807 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-slash\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425818 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovnkube-script-lib\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425809 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38c8dc67-ba64-4599-a153-2e1b9b6627b6-cnibin\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425835 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-var-lib-openvswitch\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425860 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/69838a3a-c20d-4770-b95f-ab85a265d53c-cni-binary-copy\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425839 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38c8dc67-ba64-4599-a153-2e1b9b6627b6-cnibin\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425911 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-var-lib-cni-bin\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425933 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-etc-kubernetes\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425955 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vsqg\" (UniqueName: 
\"kubernetes.io/projected/69838a3a-c20d-4770-b95f-ab85a265d53c-kube-api-access-5vsqg\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425965 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-host-var-lib-cni-bin\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425979 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-run-ovn-kubernetes\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426005 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-run-ovn-kubernetes\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426009 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-cnibin\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.425980 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-etc-kubernetes\") pod \"multus-2dv2d\" (UID: 
\"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426032 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-os-release\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426062 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-cnibin\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426077 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovn-node-metrics-cert\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426100 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-etc-openvswitch\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426116 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-os-release\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426143 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-node-log\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426119 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-node-log\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426187 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-multus-socket-dir-parent\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426196 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-etc-openvswitch\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426209 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38c8dc67-ba64-4599-a153-2e1b9b6627b6-system-cni-dir\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426237 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-systemd\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426254 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/69838a3a-c20d-4770-b95f-ab85a265d53c-multus-socket-dir-parent\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426264 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38c8dc67-ba64-4599-a153-2e1b9b6627b6-system-cni-dir\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426260 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/69838a3a-c20d-4770-b95f-ab85a265d53c-multus-daemon-config\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426286 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-systemd\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426307 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38c8dc67-ba64-4599-a153-2e1b9b6627b6-os-release\") pod 
\"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426381 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38c8dc67-ba64-4599-a153-2e1b9b6627b6-os-release\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426662 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/69838a3a-c20d-4770-b95f-ab85a265d53c-cni-binary-copy\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426733 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-cni-bin\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.426955 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/69838a3a-c20d-4770-b95f-ab85a265d53c-multus-daemon-config\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.427011 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38c8dc67-ba64-4599-a153-2e1b9b6627b6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " 
pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.431541 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.432056 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovn-node-metrics-cert\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.445165 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbd26\" (UniqueName: \"kubernetes.io/projected/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-kube-api-access-kbd26\") pod \"ovnkube-node-rmrt5\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.446090 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vsqg\" (UniqueName: \"kubernetes.io/projected/69838a3a-c20d-4770-b95f-ab85a265d53c-kube-api-access-5vsqg\") pod \"multus-2dv2d\" (UID: \"69838a3a-c20d-4770-b95f-ab85a265d53c\") " pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.447035 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2rgg\" (UniqueName: \"kubernetes.io/projected/38c8dc67-ba64-4599-a153-2e1b9b6627b6-kube-api-access-v2rgg\") pod \"multus-additional-cni-plugins-rs8xm\" (UID: \"38c8dc67-ba64-4599-a153-2e1b9b6627b6\") " pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 
00:06:58.447167 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.467370 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.482560 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.499304 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.515256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.515317 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.515330 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.515350 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.515724 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:58Z","lastTransitionTime":"2026-02-16T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.519690 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.535835 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.551916 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, 
/tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.563225 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.568978 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" Feb 16 00:06:58 crc kubenswrapper[4698]: W0216 00:06:58.580530 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38c8dc67_ba64_4599_a153_2e1b9b6627b6.slice/crio-1cdeb174c0e309824090f86a25e9f4ee8d4e9dff1489a0b275995e7cbef02518 WatchSource:0}: Error finding container 1cdeb174c0e309824090f86a25e9f4ee8d4e9dff1489a0b275995e7cbef02518: Status 404 returned error can't find the container with id 1cdeb174c0e309824090f86a25e9f4ee8d4e9dff1489a0b275995e7cbef02518 Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.581598 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.590137 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.592818 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2dv2d" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.611572 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.619083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.619128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.619142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.619163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.619174 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:58Z","lastTransitionTime":"2026-02-16T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.627211 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.641529 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.657439 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.674526 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.689250 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.699785 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.714037 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:58Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.725964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.726008 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.726020 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.726041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.726051 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:58Z","lastTransitionTime":"2026-02-16T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.829254 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.829689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.829705 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.829727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.829741 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:58Z","lastTransitionTime":"2026-02-16T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.931574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.931604 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.931628 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.931645 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.931654 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:58Z","lastTransitionTime":"2026-02-16T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.933381 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.933470 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.933496 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.933519 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:06:58 crc kubenswrapper[4698]: I0216 00:06:58.933548 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:06:58 crc kubenswrapper[4698]: E0216 00:06:58.933635 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:06:58 crc kubenswrapper[4698]: E0216 00:06:58.933679 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:06.93366711 +0000 UTC m=+36.591565872 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:06:58 crc kubenswrapper[4698]: E0216 00:06:58.933951 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:07:06.933928978 +0000 UTC m=+36.591827740 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:06:58 crc kubenswrapper[4698]: E0216 00:06:58.934030 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:06:58 crc kubenswrapper[4698]: E0216 00:06:58.934046 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:06:58 crc kubenswrapper[4698]: E0216 00:06:58.934056 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:06:58 crc kubenswrapper[4698]: E0216 00:06:58.934084 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:06.934076923 +0000 UTC m=+36.591975685 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:06:58 crc kubenswrapper[4698]: E0216 00:06:58.934134 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:06:58 crc kubenswrapper[4698]: E0216 00:06:58.934144 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:06:58 crc kubenswrapper[4698]: E0216 00:06:58.934151 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:06:58 crc kubenswrapper[4698]: E0216 00:06:58.934174 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:06.934165726 +0000 UTC m=+36.592064488 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:06:58 crc kubenswrapper[4698]: E0216 00:06:58.934213 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:06:58 crc kubenswrapper[4698]: E0216 00:06:58.934234 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:06.934229218 +0000 UTC m=+36.592127980 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.034330 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.034366 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.034376 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.034397 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.034407 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:59Z","lastTransitionTime":"2026-02-16T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.137134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.137186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.137197 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.137217 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.137232 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:59Z","lastTransitionTime":"2026-02-16T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.230542 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:44:58.917929175 +0000 UTC Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.230715 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.230768 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.230747 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:06:59 crc kubenswrapper[4698]: E0216 00:06:59.230905 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:06:59 crc kubenswrapper[4698]: E0216 00:06:59.231124 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:06:59 crc kubenswrapper[4698]: E0216 00:06:59.231055 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.239636 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.239670 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.239682 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.239697 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.239710 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:59Z","lastTransitionTime":"2026-02-16T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.343681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.343711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.343723 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.343739 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.343750 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:59Z","lastTransitionTime":"2026-02-16T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.421587 4698 generic.go:334] "Generic (PLEG): container finished" podID="38c8dc67-ba64-4599-a153-2e1b9b6627b6" containerID="989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7" exitCode=0 Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.421659 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" event={"ID":"38c8dc67-ba64-4599-a153-2e1b9b6627b6","Type":"ContainerDied","Data":"989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.421706 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" event={"ID":"38c8dc67-ba64-4599-a153-2e1b9b6627b6","Type":"ContainerStarted","Data":"1cdeb174c0e309824090f86a25e9f4ee8d4e9dff1489a0b275995e7cbef02518"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.423203 4698 generic.go:334] "Generic (PLEG): container finished" podID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerID="bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c" exitCode=0 Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.423259 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerDied","Data":"bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.423315 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerStarted","Data":"2a6b2009aec728237922dcd0d0eedf86aec7ca2849f56baf5f94648de61d1adf"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.428062 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dv2d" 
event={"ID":"69838a3a-c20d-4770-b95f-ab85a265d53c","Type":"ContainerStarted","Data":"3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.428127 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dv2d" event={"ID":"69838a3a-c20d-4770-b95f-ab85a265d53c","Type":"ContainerStarted","Data":"d1df417d72d0787418f08e2c67d841f82cfd0d09c4a8b870d5d353eeacbca583"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.443047 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.446122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.446154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.446167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.446186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.446197 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:59Z","lastTransitionTime":"2026-02-16T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.463801 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.482724 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.502629 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.515939 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.535930 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.548774 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.552589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.552702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.552720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.553354 4698 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.553413 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:59Z","lastTransitionTime":"2026-02-16T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.571031 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":fals
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\
" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.585836 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-p
roxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.600357 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.615230 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a
2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.628894 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.645396 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.657234 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.657265 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.657274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.657291 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.657300 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:59Z","lastTransitionTime":"2026-02-16T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.662904 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.673994 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.689258 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.704580 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.716517 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.733826 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.754167 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.759578 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.759634 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:59 crc 
kubenswrapper[4698]: I0216 00:06:59.759648 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.759672 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.759686 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:59Z","lastTransitionTime":"2026-02-16T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.771250 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.789247 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.806386 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.837449 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
26-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.855885 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.862417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.862829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.862839 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.862866 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.862878 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:59Z","lastTransitionTime":"2026-02-16T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.881682 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.894010 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.907168 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd5
59a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.920532 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.936723 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.969696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.969878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.969941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.970003 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:06:59 crc kubenswrapper[4698]: I0216 00:06:59.970065 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:06:59Z","lastTransitionTime":"2026-02-16T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.072435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.072489 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.072507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.072527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.072541 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:00Z","lastTransitionTime":"2026-02-16T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.176062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.176114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.176127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.176144 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.176160 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:00Z","lastTransitionTime":"2026-02-16T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.231683 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:07:17.14130942 +0000 UTC Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.278920 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.278970 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.278985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.279047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.279077 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:00Z","lastTransitionTime":"2026-02-16T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.382002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.382058 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.382070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.382089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.382101 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:00Z","lastTransitionTime":"2026-02-16T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.436656 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerStarted","Data":"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.436719 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerStarted","Data":"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.436732 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerStarted","Data":"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.436743 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerStarted","Data":"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.436756 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerStarted","Data":"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.436766 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerStarted","Data":"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b"} Feb 16 00:07:00 crc kubenswrapper[4698]: 
I0216 00:07:00.439707 4698 generic.go:334] "Generic (PLEG): container finished" podID="38c8dc67-ba64-4599-a153-2e1b9b6627b6" containerID="e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0" exitCode=0 Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.439737 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" event={"ID":"38c8dc67-ba64-4599-a153-2e1b9b6627b6","Type":"ContainerDied","Data":"e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.473717 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.485004 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.485049 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.485066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.485090 4698 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.485111 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:00Z","lastTransitionTime":"2026-02-16T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.487010 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.508295 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.528206 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 
00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.546962 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.561231 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:00 crc 
kubenswrapper[4698]: I0216 00:07:00.582205 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:00 crc 
kubenswrapper[4698]: I0216 00:07:00.587664 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.587699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.587717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.587741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.587758 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:00Z","lastTransitionTime":"2026-02-16T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.600940 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.619978 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.637692 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.653604 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.670348 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.690528 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.690989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.691023 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.691039 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.691066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.691081 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:00Z","lastTransitionTime":"2026-02-16T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.708967 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.725048 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-16T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.794299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.794774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.794939 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.795103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.795246 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:00Z","lastTransitionTime":"2026-02-16T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.898994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.899051 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.899067 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.899092 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.899107 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:00Z","lastTransitionTime":"2026-02-16T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.937745 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.957417 4698 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 00:07:00 crc kubenswrapper[4698]: I0216 00:07:00.958893 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},
{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/iptables-alerter-4ln5h/status\": read tcp 38.102.83.143:50026->38.102.83.143:6443: use of closed network connection" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.002719 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.002794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.002813 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.002841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.002866 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:01Z","lastTransitionTime":"2026-02-16T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.008544 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.027483 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.047135 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.065489 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.084955 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.101491 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.106081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.106139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.106159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.106194 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.106220 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:01Z","lastTransitionTime":"2026-02-16T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.119651 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.144277 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.164161 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\
\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276
e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.182174 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.200161 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.211236 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.211309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.211328 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.211356 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.211374 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:01Z","lastTransitionTime":"2026-02-16T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.226402 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.230955 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.230990 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.231166 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:01 crc kubenswrapper[4698]: E0216 00:07:01.231155 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:01 crc kubenswrapper[4698]: E0216 00:07:01.231274 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:01 crc kubenswrapper[4698]: E0216 00:07:01.231343 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.232846 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 03:56:52.192948217 +0000 UTC Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.241537 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.299873 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.315229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.315284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.315297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.315320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.315333 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:01Z","lastTransitionTime":"2026-02-16T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.341575 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.352774 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.372773 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.387779 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.401022 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd5
59a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.414127 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.418335 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:01 crc 
kubenswrapper[4698]: I0216 00:07:01.418379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.418389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.418407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.418417 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:01Z","lastTransitionTime":"2026-02-16T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.429576 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.444544 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.445311 4698 generic.go:334] "Generic (PLEG): container finished" podID="38c8dc67-ba64-4599-a153-2e1b9b6627b6" containerID="282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27" exitCode=0 Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.445373 4698 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" event={"ID":"38c8dc67-ba64-4599-a153-2e1b9b6627b6","Type":"ContainerDied","Data":"282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27"} Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.460131 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.472235 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.489704 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.507714 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.522104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.522158 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.522170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.522189 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.522201 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:01Z","lastTransitionTime":"2026-02-16T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.524839 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.540828 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.558686 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.575209 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.590184 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.602925 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.617315 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.625344 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.625398 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.625413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.625434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.625445 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:01Z","lastTransitionTime":"2026-02-16T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.637208 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 
00:07:01.661489 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.673854 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02
c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.708640 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.727760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.727814 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.727826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.727845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.727856 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:01Z","lastTransitionTime":"2026-02-16T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.745167 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.784971 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.825947 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.830907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 
00:07:01.830940 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.830950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.830966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.830976 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:01Z","lastTransitionTime":"2026-02-16T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.869496 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.909814 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.933641 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.933675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.933686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.933705 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.933716 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:01Z","lastTransitionTime":"2026-02-16T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.948548 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:01 crc kubenswrapper[4698]: I0216 00:07:01.986951 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.036463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.036502 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.036512 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.036531 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.036543 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:02Z","lastTransitionTime":"2026-02-16T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.139075 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.139122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.139136 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.139155 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.139167 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:02Z","lastTransitionTime":"2026-02-16T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.233334 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:47:09.240200445 +0000 UTC Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.241563 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.241609 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.241644 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.241670 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.241684 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:02Z","lastTransitionTime":"2026-02-16T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.343824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.343882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.343896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.343919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.343932 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:02Z","lastTransitionTime":"2026-02-16T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.447996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.448064 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.448091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.448123 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.448147 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:02Z","lastTransitionTime":"2026-02-16T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.453156 4698 generic.go:334] "Generic (PLEG): container finished" podID="38c8dc67-ba64-4599-a153-2e1b9b6627b6" containerID="d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec" exitCode=0 Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.453192 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" event={"ID":"38c8dc67-ba64-4599-a153-2e1b9b6627b6","Type":"ContainerDied","Data":"d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec"} Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.483589 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.511022 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702
f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997b
bc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.522106 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.544499 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16
T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.551698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.551743 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.551756 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.551775 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.551788 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:02Z","lastTransitionTime":"2026-02-16T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.560246 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.574882 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.590528 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.603325 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k
9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.618160 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.634882 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.652291 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.655066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.655353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.655453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.655559 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.655680 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:02Z","lastTransitionTime":"2026-02-16T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.666798 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.683979 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.700982 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\"
,\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.714923 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.758828 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.758882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.758897 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.758925 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.758942 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:02Z","lastTransitionTime":"2026-02-16T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.862257 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.862538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.862647 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.862730 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.862797 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:02Z","lastTransitionTime":"2026-02-16T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.966441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.966800 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.966812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.966831 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:02 crc kubenswrapper[4698]: I0216 00:07:02.966841 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:02Z","lastTransitionTime":"2026-02-16T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.069688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.069733 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.069744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.069761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.069771 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:03Z","lastTransitionTime":"2026-02-16T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.172717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.172763 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.172774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.172794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.172808 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:03Z","lastTransitionTime":"2026-02-16T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.231207 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.231298 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:03 crc kubenswrapper[4698]: E0216 00:07:03.231383 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.231463 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:03 crc kubenswrapper[4698]: E0216 00:07:03.231741 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:03 crc kubenswrapper[4698]: E0216 00:07:03.231881 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.233534 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:59:30.594842196 +0000 UTC Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.276172 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.276230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.276244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.276277 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.276291 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:03Z","lastTransitionTime":"2026-02-16T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.379794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.379850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.379863 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.379887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.379900 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:03Z","lastTransitionTime":"2026-02-16T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.469443 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerStarted","Data":"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb"} Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.473657 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" event={"ID":"38c8dc67-ba64-4599-a153-2e1b9b6627b6","Type":"ContainerStarted","Data":"49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f"} Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.483079 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.483116 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.483130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.483151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.483165 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:03Z","lastTransitionTime":"2026-02-16T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.491724 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.514414 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.538082 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.558120 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.574110 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.586130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.586174 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.586189 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.586210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.586222 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:03Z","lastTransitionTime":"2026-02-16T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.595400 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2r
gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.614747 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd5985100
8b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.633265 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.658662 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.688636 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.688682 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.688696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.688715 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.688727 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:03Z","lastTransitionTime":"2026-02-16T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.696181 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.714670 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.732650 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16
T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.748154 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.764080 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.783717 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-2dv2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.792233 4698 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.792347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.792374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.792403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.792427 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:03Z","lastTransitionTime":"2026-02-16T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.896012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.896103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.896126 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.896157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.896178 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:03Z","lastTransitionTime":"2026-02-16T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.999194 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.999307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.999329 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.999359 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:03 crc kubenswrapper[4698]: I0216 00:07:03.999384 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:03Z","lastTransitionTime":"2026-02-16T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.102814 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.102871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.102884 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.102912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.102927 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:04Z","lastTransitionTime":"2026-02-16T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.206352 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.206416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.206430 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.206456 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.206470 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:04Z","lastTransitionTime":"2026-02-16T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.234046 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 01:37:31.429091139 +0000 UTC Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.308798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.308861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.308877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.308896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.308910 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:04Z","lastTransitionTime":"2026-02-16T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.412389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.412464 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.412488 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.412520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.412545 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:04Z","lastTransitionTime":"2026-02-16T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.483448 4698 generic.go:334] "Generic (PLEG): container finished" podID="38c8dc67-ba64-4599-a153-2e1b9b6627b6" containerID="49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f" exitCode=0 Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.483507 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" event={"ID":"38c8dc67-ba64-4599-a153-2e1b9b6627b6","Type":"ContainerDied","Data":"49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f"} Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.505408 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.516944 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.517025 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.517056 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.517093 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.517122 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:04Z","lastTransitionTime":"2026-02-16T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.532974 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.554734 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.574418 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.590974 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.605683 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.620305 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.620349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.620361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.620380 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.620394 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:04Z","lastTransitionTime":"2026-02-16T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.629386 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.653894 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.672906 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.689499 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.714243 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.723901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.723933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.723944 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.723962 4698 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.723973 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:04Z","lastTransitionTime":"2026-02-16T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.731336 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd5985100
8b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.743034 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.765951 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.791220 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.826315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.826369 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.826382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.826401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.826412 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:04Z","lastTransitionTime":"2026-02-16T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.929815 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.929860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.929871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.929894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:04 crc kubenswrapper[4698]: I0216 00:07:04.929904 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:04Z","lastTransitionTime":"2026-02-16T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.033058 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.033107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.033120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.033143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.033159 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:05Z","lastTransitionTime":"2026-02-16T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.136307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.136373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.136396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.136426 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.136447 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:05Z","lastTransitionTime":"2026-02-16T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.230948 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.230992 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:05 crc kubenswrapper[4698]: E0216 00:07:05.231189 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.231228 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:05 crc kubenswrapper[4698]: E0216 00:07:05.231363 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:05 crc kubenswrapper[4698]: E0216 00:07:05.231686 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.234439 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:14:36.629985651 +0000 UTC Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.239109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.239453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.239632 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.239757 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.239878 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:05Z","lastTransitionTime":"2026-02-16T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.343123 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.343525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.343841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.344073 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.344268 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:05Z","lastTransitionTime":"2026-02-16T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.447444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.447493 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.447506 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.447527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.447541 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:05Z","lastTransitionTime":"2026-02-16T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.495655 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerStarted","Data":"4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901"} Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.495898 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.495948 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.495973 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.501657 4698 generic.go:334] "Generic (PLEG): container finished" podID="38c8dc67-ba64-4599-a153-2e1b9b6627b6" containerID="b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9" exitCode=0 Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.501745 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" event={"ID":"38c8dc67-ba64-4599-a153-2e1b9b6627b6","Type":"ContainerDied","Data":"b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9"} Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.521344 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.532773 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.534027 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.542266 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.550465 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.550532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.550546 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.550569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.550579 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:05Z","lastTransitionTime":"2026-02-16T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.569369 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.584701 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.602899 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\
\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.618415 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.630696 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.643213 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.654650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.654691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.654704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.654726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.654738 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:05Z","lastTransitionTime":"2026-02-16T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.656860 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.669829 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.693217 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.714943 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\"
,\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.732678 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.755725 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.757423 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.757482 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.757496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.757521 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.757534 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:05Z","lastTransitionTime":"2026-02-16T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.771441 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.792713 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.809338 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.823383 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.837743 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.860245 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.860302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.860319 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.860341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.860358 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:05Z","lastTransitionTime":"2026-02-16T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.864765 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.884537 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\"
,\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.903463 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.921654 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.945817 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074
e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.958409 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.963537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.963604 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.963660 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.963694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.963727 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:05Z","lastTransitionTime":"2026-02-16T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.982112 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:05 crc kubenswrapper[4698]: I0216 00:07:05.999389 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.019658 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd5
59a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.037073 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.053016 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.065943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.065976 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.065987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.066022 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.066035 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:06Z","lastTransitionTime":"2026-02-16T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.169925 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.170013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.170040 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.170077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.170100 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:06Z","lastTransitionTime":"2026-02-16T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.234730 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 14:58:05.991562095 +0000 UTC Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.273600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.273718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.273739 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.273770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.273788 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:06Z","lastTransitionTime":"2026-02-16T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.376798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.377146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.377315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.377496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.377696 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:06Z","lastTransitionTime":"2026-02-16T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.480733 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.480778 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.480791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.480809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.480821 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:06Z","lastTransitionTime":"2026-02-16T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.509348 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" event={"ID":"38c8dc67-ba64-4599-a153-2e1b9b6627b6","Type":"ContainerStarted","Data":"fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51"} Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.530343 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.551236 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.570861 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.583955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.584014 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.584040 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.584065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.584085 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:06Z","lastTransitionTime":"2026-02-16T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.597870 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.622583 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.647425 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.670019 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.687347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.687415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.687437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.687463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.687484 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:06Z","lastTransitionTime":"2026-02-16T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.688607 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.721858 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.755020 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.790302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.790353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.790364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.790382 4698 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.790396 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:06Z","lastTransitionTime":"2026-02-16T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.796042 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.810207 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:5
1Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.825065 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.839114 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.853150 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-2dv2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.893070 4698 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.893141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.893160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.893188 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.893208 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:06Z","lastTransitionTime":"2026-02-16T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.996124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.996167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.996178 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.996196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:06 crc kubenswrapper[4698]: I0216 00:07:06.996207 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:06Z","lastTransitionTime":"2026-02-16T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.028130 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.028330 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.028415 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:07:23.028369198 +0000 UTC m=+52.686268000 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.028504 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.028591 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.028723 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.028523 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not 
registered Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.028731 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.028771 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.029030 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.029067 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.029084 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.029093 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.028788 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:07:07 crc 
kubenswrapper[4698]: E0216 00:07:07.028903 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:23.028879344 +0000 UTC m=+52.686778146 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.029298 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:23.029264767 +0000 UTC m=+52.687163569 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.029338 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:23.029326208 +0000 UTC m=+52.687225000 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.029373 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:23.02936078 +0000 UTC m=+52.687259582 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.098405 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.098479 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.098497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.098527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.098550 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:07Z","lastTransitionTime":"2026-02-16T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.200749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.200795 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.200807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.200828 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.200839 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:07Z","lastTransitionTime":"2026-02-16T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.230809 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.230833 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.230948 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.231062 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.231373 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.231538 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.235089 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:55:04.013454201 +0000 UTC Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.303316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.303374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.303387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.303408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.303420 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:07Z","lastTransitionTime":"2026-02-16T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.406437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.406542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.406574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.406611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.406666 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:07Z","lastTransitionTime":"2026-02-16T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.509771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.510212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.510227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.510259 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.510271 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:07Z","lastTransitionTime":"2026-02-16T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.612871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.612929 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.612944 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.612966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.612984 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:07Z","lastTransitionTime":"2026-02-16T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.716371 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.716675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.716691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.716743 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.716758 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:07Z","lastTransitionTime":"2026-02-16T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.819002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.819069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.819092 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.819125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.819153 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:07Z","lastTransitionTime":"2026-02-16T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.877487 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.877546 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.877559 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.877589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.877603 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:07Z","lastTransitionTime":"2026-02-16T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.894563 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.900171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.900231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.900255 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.900290 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.900313 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:07Z","lastTransitionTime":"2026-02-16T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.918241 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:07Z is after 2025-08-24T17:21:41Z"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.922467 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.922515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.922533 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.922554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.922572 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:07Z","lastTransitionTime":"2026-02-16T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.943086 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:07Z is after 2025-08-24T17:21:41Z"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.947473 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.947537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.947557 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.947581 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.947598 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:07Z","lastTransitionTime":"2026-02-16T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.969644 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:07Z is after 2025-08-24T17:21:41Z"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.974685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.974732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.974750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.974775 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.974833 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:07Z","lastTransitionTime":"2026-02-16T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.997683 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:07 crc kubenswrapper[4698]: E0216 00:07:07.997836 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.999645 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.999689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.999708 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.999734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:07 crc kubenswrapper[4698]: I0216 00:07:07.999751 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:07Z","lastTransitionTime":"2026-02-16T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.102859 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.102900 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.102915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.102938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.102952 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:08Z","lastTransitionTime":"2026-02-16T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.206118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.206168 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.206182 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.206200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.206211 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:08Z","lastTransitionTime":"2026-02-16T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.235503 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:11:50.56127156 +0000 UTC Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.310752 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.310824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.310848 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.310882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.310906 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:08Z","lastTransitionTime":"2026-02-16T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.414672 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.414736 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.414754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.414778 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.414797 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:08Z","lastTransitionTime":"2026-02-16T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.518233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.518675 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/0.log" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.518698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.518715 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.518733 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.518743 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:08Z","lastTransitionTime":"2026-02-16T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.520895 4698 generic.go:334] "Generic (PLEG): container finished" podID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerID="4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901" exitCode=1 Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.520943 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerDied","Data":"4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901"} Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.521696 4698 scope.go:117] "RemoveContainer" containerID="4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.544345 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.560759 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.578140 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.594518 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.610152 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.621278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.621463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.621559 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.621684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.621804 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:08Z","lastTransitionTime":"2026-02-16T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.627310 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.649839 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157
bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.666006 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.681114 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.705737 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0216 00:07:07.809142 5993 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 00:07:07.809245 5993 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 00:07:07.809365 5993 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 00:07:07.809398 5993 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 00:07:07.809406 5993 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 00:07:07.809424 5993 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 00:07:07.809433 5993 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 00:07:07.809458 5993 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 00:07:07.809494 5993 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 00:07:07.809494 5993 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 00:07:07.809510 5993 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 00:07:07.809535 5993 factory.go:656] Stopping watch factory\\\\nI0216 00:07:07.809549 5993 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.725478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.725520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.725533 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.725551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.725565 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:08Z","lastTransitionTime":"2026-02-16T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.730545 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8
bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.746001 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.762697 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.780596 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.798073 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.828732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:08 crc 
kubenswrapper[4698]: I0216 00:07:08.828789 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.828804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.828824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.828834 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:08Z","lastTransitionTime":"2026-02-16T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.932179 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.932231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.932247 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.932266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:08 crc kubenswrapper[4698]: I0216 00:07:08.932280 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:08Z","lastTransitionTime":"2026-02-16T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.034711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.034760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.034776 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.034795 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.034807 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:09Z","lastTransitionTime":"2026-02-16T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.137451 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.137509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.137521 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.137542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.137556 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:09Z","lastTransitionTime":"2026-02-16T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.231152 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.231200 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.231209 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:09 crc kubenswrapper[4698]: E0216 00:07:09.231331 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:09 crc kubenswrapper[4698]: E0216 00:07:09.231470 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:09 crc kubenswrapper[4698]: E0216 00:07:09.231652 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.236050 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:38:57.664741277 +0000 UTC Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.241027 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.241078 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.241089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.241110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.241124 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:09Z","lastTransitionTime":"2026-02-16T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.343771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.343822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.343834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.343853 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.343866 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:09Z","lastTransitionTime":"2026-02-16T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.446327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.446444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.446471 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.446505 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.446532 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:09Z","lastTransitionTime":"2026-02-16T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.530807 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/1.log" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.531816 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/0.log" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.535329 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerDied","Data":"cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e"} Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.535340 4698 generic.go:334] "Generic (PLEG): container finished" podID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerID="cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e" exitCode=1 Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.535420 4698 scope.go:117] "RemoveContainer" containerID="4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.537061 4698 scope.go:117] "RemoveContainer" containerID="cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e" Feb 16 00:07:09 crc kubenswrapper[4698]: E0216 00:07:09.538136 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.550293 4698 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.550361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.550399 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.550429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.550449 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:09Z","lastTransitionTime":"2026-02-16T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.551350 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.565761 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.578856 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.594600 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.617398 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157
bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.633308 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.646874 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.654766 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.654836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.654851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.654878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.654896 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:09Z","lastTransitionTime":"2026-02-16T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.668709 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.689286 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.704121 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.724941 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0216 00:07:07.809142 5993 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 00:07:07.809245 5993 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 00:07:07.809365 5993 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 00:07:07.809398 5993 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 00:07:07.809406 5993 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 00:07:07.809424 5993 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 00:07:07.809433 5993 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 00:07:07.809458 5993 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 00:07:07.809494 5993 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 00:07:07.809494 5993 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 00:07:07.809510 5993 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 00:07:07.809535 5993 factory.go:656] Stopping watch factory\\\\nI0216 00:07:07.809549 5993 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:09Z\\\",\\\"message\\\":\\\"ring{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.41\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 00:07:09.429205 6138 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd5
3d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.738549 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.753423 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.757816 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 
00:07:09.757885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.757899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.757921 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.757934 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:09Z","lastTransitionTime":"2026-02-16T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.769980 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.790773 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.861511 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.861556 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.861569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.861592 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.861603 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:09Z","lastTransitionTime":"2026-02-16T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.964600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.964650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.964659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.964676 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:09 crc kubenswrapper[4698]: I0216 00:07:09.964689 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:09Z","lastTransitionTime":"2026-02-16T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.068883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.068946 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.068964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.068990 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.069009 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:10Z","lastTransitionTime":"2026-02-16T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.172275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.172340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.172354 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.172376 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.172390 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:10Z","lastTransitionTime":"2026-02-16T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.236429 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:51:08.799724641 +0000 UTC Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.275459 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.275505 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.275518 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.275534 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.275549 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:10Z","lastTransitionTime":"2026-02-16T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.378610 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.378686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.378698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.378725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.378736 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:10Z","lastTransitionTime":"2026-02-16T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.482032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.482105 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.482125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.482153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.482174 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:10Z","lastTransitionTime":"2026-02-16T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.542068 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/1.log" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.585882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.585942 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.585966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.586007 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.586030 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:10Z","lastTransitionTime":"2026-02-16T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.689681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.690028 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.690104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.690198 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.690272 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:10Z","lastTransitionTime":"2026-02-16T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.794685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.794754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.794775 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.794798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.794813 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:10Z","lastTransitionTime":"2026-02-16T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.898647 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.898713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.898737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.898779 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:10 crc kubenswrapper[4698]: I0216 00:07:10.898820 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:10Z","lastTransitionTime":"2026-02-16T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.002020 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.002080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.002094 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.002121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.002143 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:11Z","lastTransitionTime":"2026-02-16T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.105181 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.105253 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.105273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.105300 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.105320 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:11Z","lastTransitionTime":"2026-02-16T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.208520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.208564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.208576 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.208592 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.208604 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:11Z","lastTransitionTime":"2026-02-16T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.248725 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:11 crc kubenswrapper[4698]: E0216 00:07:11.248995 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.249091 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:11 crc kubenswrapper[4698]: E0216 00:07:11.249308 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.249927 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.249965 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 14:24:02.804140959 +0000 UTC Feb 16 00:07:11 crc kubenswrapper[4698]: E0216 00:07:11.250064 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.261608 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt"] Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.262382 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.267894 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.270001 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.276070 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.291098 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.311573 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.311632 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.311655 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.311678 4698 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.311690 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:11Z","lastTransitionTime":"2026-02-16T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.316960 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0216 00:07:07.809142 5993 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 00:07:07.809245 5993 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 00:07:07.809365 5993 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 00:07:07.809398 5993 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 00:07:07.809406 5993 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 00:07:07.809424 5993 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 00:07:07.809433 5993 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 00:07:07.809458 5993 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 00:07:07.809494 5993 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 00:07:07.809494 5993 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 00:07:07.809510 5993 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 00:07:07.809535 5993 factory.go:656] Stopping watch factory\\\\nI0216 00:07:07.809549 5993 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:09Z\\\",\\\"message\\\":\\\"ring{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.41\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 00:07:09.429205 6138 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd5
3d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.334289 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.350463 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.365082 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.376740 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mblf5\" (UniqueName: 
\"kubernetes.io/projected/fca7a940-fd0a-4b48-8cdd-086dd7ef42eb-kube-api-access-mblf5\") pod \"ovnkube-control-plane-749d76644c-ckgrt\" (UID: \"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.376799 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fca7a940-fd0a-4b48-8cdd-086dd7ef42eb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ckgrt\" (UID: \"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.376830 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fca7a940-fd0a-4b48-8cdd-086dd7ef42eb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ckgrt\" (UID: \"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.376933 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fca7a940-fd0a-4b48-8cdd-086dd7ef42eb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ckgrt\" (UID: \"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.388855 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.410938 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.413852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.413923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.413943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.413972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.413995 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:11Z","lastTransitionTime":"2026-02-16T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.429688 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.447164 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.473121 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.477813 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fca7a940-fd0a-4b48-8cdd-086dd7ef42eb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ckgrt\" (UID: \"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.477877 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mblf5\" (UniqueName: \"kubernetes.io/projected/fca7a940-fd0a-4b48-8cdd-086dd7ef42eb-kube-api-access-mblf5\") pod \"ovnkube-control-plane-749d76644c-ckgrt\" (UID: \"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.477935 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fca7a940-fd0a-4b48-8cdd-086dd7ef42eb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ckgrt\" (UID: \"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.477966 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fca7a940-fd0a-4b48-8cdd-086dd7ef42eb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ckgrt\" (UID: \"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.478676 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fca7a940-fd0a-4b48-8cdd-086dd7ef42eb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ckgrt\" (UID: \"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.479438 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fca7a940-fd0a-4b48-8cdd-086dd7ef42eb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ckgrt\" (UID: \"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.487252 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fca7a940-fd0a-4b48-8cdd-086dd7ef42eb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ckgrt\" (UID: \"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.489888 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.511315 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.512593 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mblf5\" (UniqueName: \"kubernetes.io/projected/fca7a940-fd0a-4b48-8cdd-086dd7ef42eb-kube-api-access-mblf5\") pod \"ovnkube-control-plane-749d76644c-ckgrt\" (UID: \"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.521831 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.522161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.522348 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.522504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.522721 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:11Z","lastTransitionTime":"2026-02-16T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.529465 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.549369 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157
bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.567520 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.584870 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.585726 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.603448 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.626400 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.626465 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.626485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.626514 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.626536 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:11Z","lastTransitionTime":"2026-02-16T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.626875 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.652139 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157
bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.670909 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.695498 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.711679 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.729230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.729297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.729318 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.729348 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.729372 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:11Z","lastTransitionTime":"2026-02-16T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.733900 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.764586 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.779784 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.799764 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0216 00:07:07.809142 5993 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 00:07:07.809245 5993 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 00:07:07.809365 5993 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 00:07:07.809398 5993 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 00:07:07.809406 5993 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 00:07:07.809424 5993 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 00:07:07.809433 5993 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 00:07:07.809458 5993 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 00:07:07.809494 5993 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 00:07:07.809494 5993 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 00:07:07.809510 5993 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 00:07:07.809535 5993 factory.go:656] Stopping watch factory\\\\nI0216 00:07:07.809549 5993 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:09Z\\\",\\\"message\\\":\\\"ring{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.41\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 00:07:09.429205 6138 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd5
3d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.816493 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.830548 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.833256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 
00:07:11.833372 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.833395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.833416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.833431 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:11Z","lastTransitionTime":"2026-02-16T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.849754 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.870224 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.936314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.936385 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.936408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.936441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:11 crc kubenswrapper[4698]: I0216 00:07:11.936464 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:11Z","lastTransitionTime":"2026-02-16T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.039090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.039145 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.039155 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.039177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.039192 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:12Z","lastTransitionTime":"2026-02-16T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.142240 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.142319 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.142338 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.142374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.142402 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:12Z","lastTransitionTime":"2026-02-16T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.245788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.245842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.245853 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.245874 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.245885 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:12Z","lastTransitionTime":"2026-02-16T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.250886 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 20:19:41.167498281 +0000 UTC Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.348812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.348871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.348889 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.348921 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.348941 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:12Z","lastTransitionTime":"2026-02-16T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.451816 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.452282 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.452406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.452525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.452657 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:12Z","lastTransitionTime":"2026-02-16T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.555646 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.555695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.555709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.555730 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.555748 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:12Z","lastTransitionTime":"2026-02-16T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.566681 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" event={"ID":"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb","Type":"ContainerStarted","Data":"b2e24a691e92de7fbfd6fad27ff1c960aa48b9a3fb8be9fd9c03065c360fe3d5"} Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.566767 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" event={"ID":"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb","Type":"ContainerStarted","Data":"f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0"} Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.566787 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" event={"ID":"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb","Type":"ContainerStarted","Data":"3ba660a88ad79fcca7f9fc408f7dde56b530025ba3975be0f21b56f8643fe940"} Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.592302 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0216 00:07:07.809142 5993 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 00:07:07.809245 5993 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 00:07:07.809365 5993 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 00:07:07.809398 5993 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 00:07:07.809406 5993 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 00:07:07.809424 5993 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 00:07:07.809433 5993 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 00:07:07.809458 5993 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 00:07:07.809494 5993 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 00:07:07.809494 5993 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 00:07:07.809510 5993 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 00:07:07.809535 5993 factory.go:656] Stopping watch factory\\\\nI0216 00:07:07.809549 5993 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:09Z\\\",\\\"message\\\":\\\"ring{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.41\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 00:07:09.429205 6138 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd5
3d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.617212 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.631159 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.646855 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.657909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.657946 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.657961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.657982 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.657996 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:12Z","lastTransitionTime":"2026-02-16T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.658850 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.671540 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.712100 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T0
0:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.730314 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.748306 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.751567 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-fgr4f"] Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.752164 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:12 crc kubenswrapper[4698]: E0216 00:07:12.752239 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.760797 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.760830 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.760841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.760857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.760869 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:12Z","lastTransitionTime":"2026-02-16T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.763791 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.778867 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.790608 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.804983 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157
bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.816176 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.828954 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.842464 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.854064 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.863361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.863397 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.863409 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.863427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.863436 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:12Z","lastTransitionTime":"2026-02-16T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.864463 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.876714 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.886376 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc 
kubenswrapper[4698]: I0216 00:07:12.893111 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfhs8\" (UniqueName: \"kubernetes.io/projected/87629f1e-d9d5-4302-a92a-f9ac3bad1707-kube-api-access-pfhs8\") pod \"network-metrics-daemon-fgr4f\" (UID: \"87629f1e-d9d5-4302-a92a-f9ac3bad1707\") " pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.893250 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs\") pod \"network-metrics-daemon-fgr4f\" (UID: \"87629f1e-d9d5-4302-a92a-f9ac3bad1707\") " pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.898277 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.911230 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.924217 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.936966 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.949248 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.965446 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157
bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.965943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.966037 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.966102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.966170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.966243 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:12Z","lastTransitionTime":"2026-02-16T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.978525 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.994261 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfhs8\" (UniqueName: 
\"kubernetes.io/projected/87629f1e-d9d5-4302-a92a-f9ac3bad1707-kube-api-access-pfhs8\") pod \"network-metrics-daemon-fgr4f\" (UID: \"87629f1e-d9d5-4302-a92a-f9ac3bad1707\") " pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.994340 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs\") pod \"network-metrics-daemon-fgr4f\" (UID: \"87629f1e-d9d5-4302-a92a-f9ac3bad1707\") " pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:12 crc kubenswrapper[4698]: E0216 00:07:12.994467 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:07:12 crc kubenswrapper[4698]: E0216 00:07:12.994528 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs podName:87629f1e-d9d5-4302-a92a-f9ac3bad1707 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:13.494512048 +0000 UTC m=+43.152410810 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs") pod "network-metrics-daemon-fgr4f" (UID: "87629f1e-d9d5-4302-a92a-f9ac3bad1707") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:07:12 crc kubenswrapper[4698]: I0216 00:07:12.998540 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.013742 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.015189 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfhs8\" 
(UniqueName: \"kubernetes.io/projected/87629f1e-d9d5-4302-a92a-f9ac3bad1707-kube-api-access-pfhs8\") pod \"network-metrics-daemon-fgr4f\" (UID: \"87629f1e-d9d5-4302-a92a-f9ac3bad1707\") " pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.028056 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.046541 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.058814 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.068858 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.068913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.068925 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.068945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.068960 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:13Z","lastTransitionTime":"2026-02-16T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.078349 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0216 00:07:07.809142 5993 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 00:07:07.809245 5993 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 00:07:07.809365 5993 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 00:07:07.809398 5993 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 00:07:07.809406 5993 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 00:07:07.809424 5993 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 00:07:07.809433 5993 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 00:07:07.809458 5993 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 00:07:07.809494 5993 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 00:07:07.809494 5993 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 00:07:07.809510 5993 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 00:07:07.809535 5993 factory.go:656] Stopping watch factory\\\\nI0216 00:07:07.809549 5993 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:09Z\\\",\\\"message\\\":\\\"ring{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.41\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 00:07:09.429205 6138 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd5
3d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.171426 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.171479 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.171493 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.171517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.171531 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:13Z","lastTransitionTime":"2026-02-16T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.231489 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.231580 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.231491 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:13 crc kubenswrapper[4698]: E0216 00:07:13.231694 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:13 crc kubenswrapper[4698]: E0216 00:07:13.231845 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:13 crc kubenswrapper[4698]: E0216 00:07:13.231979 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.251282 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:13:46.425843303 +0000 UTC Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.274556 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.274703 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.274737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.274770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.274789 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:13Z","lastTransitionTime":"2026-02-16T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.378745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.378813 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.378834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.378864 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.378884 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:13Z","lastTransitionTime":"2026-02-16T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.482194 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.482297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.482319 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.482367 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.482411 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:13Z","lastTransitionTime":"2026-02-16T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.499990 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs\") pod \"network-metrics-daemon-fgr4f\" (UID: \"87629f1e-d9d5-4302-a92a-f9ac3bad1707\") " pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:13 crc kubenswrapper[4698]: E0216 00:07:13.500242 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:07:13 crc kubenswrapper[4698]: E0216 00:07:13.500354 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs podName:87629f1e-d9d5-4302-a92a-f9ac3bad1707 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:14.500325639 +0000 UTC m=+44.158224441 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs") pod "network-metrics-daemon-fgr4f" (UID: "87629f1e-d9d5-4302-a92a-f9ac3bad1707") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.585889 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.585982 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.586002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.586099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.586131 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:13Z","lastTransitionTime":"2026-02-16T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.689782 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.689825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.689839 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.689862 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.689874 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:13Z","lastTransitionTime":"2026-02-16T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.792894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.792990 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.793019 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.793062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.793086 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:13Z","lastTransitionTime":"2026-02-16T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.895335 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.895407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.895423 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.895450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.895468 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:13Z","lastTransitionTime":"2026-02-16T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.998680 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.998744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.998758 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.998781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:13 crc kubenswrapper[4698]: I0216 00:07:13.998795 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:13Z","lastTransitionTime":"2026-02-16T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.101799 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.101860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.101879 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.101907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.101925 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:14Z","lastTransitionTime":"2026-02-16T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.205316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.205401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.205425 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.205453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.205473 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:14Z","lastTransitionTime":"2026-02-16T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.230754 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:14 crc kubenswrapper[4698]: E0216 00:07:14.230976 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.252084 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 16:54:10.55578727 +0000 UTC Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.309258 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.309308 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.309326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.309351 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.309368 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:14Z","lastTransitionTime":"2026-02-16T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.413010 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.413086 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.413112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.413146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.413169 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:14Z","lastTransitionTime":"2026-02-16T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.510970 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs\") pod \"network-metrics-daemon-fgr4f\" (UID: \"87629f1e-d9d5-4302-a92a-f9ac3bad1707\") " pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:14 crc kubenswrapper[4698]: E0216 00:07:14.511182 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:07:14 crc kubenswrapper[4698]: E0216 00:07:14.511263 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs podName:87629f1e-d9d5-4302-a92a-f9ac3bad1707 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:16.511244969 +0000 UTC m=+46.169143751 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs") pod "network-metrics-daemon-fgr4f" (UID: "87629f1e-d9d5-4302-a92a-f9ac3bad1707") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.516173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.516222 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.516248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.516278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.516300 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:14Z","lastTransitionTime":"2026-02-16T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.618242 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.618278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.618287 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.618320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.618332 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:14Z","lastTransitionTime":"2026-02-16T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.721771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.722229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.722376 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.722515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.722687 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:14Z","lastTransitionTime":"2026-02-16T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.825911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.826360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.826515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.826727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.826888 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:14Z","lastTransitionTime":"2026-02-16T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.930679 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.930861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.930892 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.930932 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:14 crc kubenswrapper[4698]: I0216 00:07:14.930957 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:14Z","lastTransitionTime":"2026-02-16T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.034498 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.034562 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.034581 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.034608 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.034663 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:15Z","lastTransitionTime":"2026-02-16T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.137749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.137790 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.137803 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.137822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.137832 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:15Z","lastTransitionTime":"2026-02-16T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.230705 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.230914 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.231121 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:15 crc kubenswrapper[4698]: E0216 00:07:15.231107 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:15 crc kubenswrapper[4698]: E0216 00:07:15.231435 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:15 crc kubenswrapper[4698]: E0216 00:07:15.231548 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.240565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.240678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.240706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.240738 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.240763 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:15Z","lastTransitionTime":"2026-02-16T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.252997 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:25:25.62084154 +0000 UTC Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.343708 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.343769 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.343799 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.343836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.343864 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:15Z","lastTransitionTime":"2026-02-16T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.447332 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.447439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.447457 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.447486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.447506 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:15Z","lastTransitionTime":"2026-02-16T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.552711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.552761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.552772 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.552788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.552803 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:15Z","lastTransitionTime":"2026-02-16T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.656489 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.656577 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.656599 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.656661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.656681 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:15Z","lastTransitionTime":"2026-02-16T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.759844 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.759930 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.759945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.759967 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.759981 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:15Z","lastTransitionTime":"2026-02-16T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.862830 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.863131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.863199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.863261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.863338 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:15Z","lastTransitionTime":"2026-02-16T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.967353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.967410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.967428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.967452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:15 crc kubenswrapper[4698]: I0216 00:07:15.967474 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:15Z","lastTransitionTime":"2026-02-16T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.071420 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.071494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.071512 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.071541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.071562 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:16Z","lastTransitionTime":"2026-02-16T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.175381 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.175444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.175463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.175490 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.175509 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:16Z","lastTransitionTime":"2026-02-16T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.230795 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:16 crc kubenswrapper[4698]: E0216 00:07:16.231072 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.253396 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:35:05.987110336 +0000 UTC Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.279053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.279141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.279164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.279195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.279217 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:16Z","lastTransitionTime":"2026-02-16T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.383125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.383199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.383222 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.383250 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.383276 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:16Z","lastTransitionTime":"2026-02-16T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.486289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.486330 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.486343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.486364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.486378 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:16Z","lastTransitionTime":"2026-02-16T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.538147 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs\") pod \"network-metrics-daemon-fgr4f\" (UID: \"87629f1e-d9d5-4302-a92a-f9ac3bad1707\") " pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:16 crc kubenswrapper[4698]: E0216 00:07:16.538416 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:07:16 crc kubenswrapper[4698]: E0216 00:07:16.538531 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs podName:87629f1e-d9d5-4302-a92a-f9ac3bad1707 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:20.538503598 +0000 UTC m=+50.196402370 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs") pod "network-metrics-daemon-fgr4f" (UID: "87629f1e-d9d5-4302-a92a-f9ac3bad1707") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.589159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.589212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.589221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.589237 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.589247 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:16Z","lastTransitionTime":"2026-02-16T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.691791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.691847 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.691861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.691883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.691897 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:16Z","lastTransitionTime":"2026-02-16T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.795275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.795346 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.795366 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.795396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.795417 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:16Z","lastTransitionTime":"2026-02-16T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.899156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.899229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.899247 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.899273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:16 crc kubenswrapper[4698]: I0216 00:07:16.899296 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:16Z","lastTransitionTime":"2026-02-16T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.002516 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.002915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.003013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.003112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.003194 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:17Z","lastTransitionTime":"2026-02-16T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.106558 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.107047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.107254 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.107437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.107584 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:17Z","lastTransitionTime":"2026-02-16T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.210420 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.210875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.211022 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.211217 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.211340 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:17Z","lastTransitionTime":"2026-02-16T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.231075 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.231303 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:17 crc kubenswrapper[4698]: E0216 00:07:17.231496 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.231231 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:17 crc kubenswrapper[4698]: E0216 00:07:17.231728 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:17 crc kubenswrapper[4698]: E0216 00:07:17.232063 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.253872 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:56:39.275728134 +0000 UTC Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.315321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.315418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.315444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.315475 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.315709 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:17Z","lastTransitionTime":"2026-02-16T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.419268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.419333 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.419344 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.419364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.419382 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:17Z","lastTransitionTime":"2026-02-16T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.522268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.522364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.522392 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.522417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.522437 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:17Z","lastTransitionTime":"2026-02-16T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.625055 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.625144 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.625169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.625203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.625230 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:17Z","lastTransitionTime":"2026-02-16T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.728759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.728847 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.728866 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.728903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.728923 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:17Z","lastTransitionTime":"2026-02-16T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.832171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.832235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.832253 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.832284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.832307 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:17Z","lastTransitionTime":"2026-02-16T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.935953 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.936032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.936054 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.936082 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:17 crc kubenswrapper[4698]: I0216 00:07:17.936103 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:17Z","lastTransitionTime":"2026-02-16T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.039580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.039704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.039724 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.039780 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.039799 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.142520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.142592 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.142645 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.142718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.142740 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.230872 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:18 crc kubenswrapper[4698]: E0216 00:07:18.231050 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.245971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.246045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.246062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.246084 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.246100 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.253967 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 13:13:38.925868322 +0000 UTC Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.348655 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.348704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.348715 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.348731 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.348744 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.360785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.360831 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.360844 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.360864 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.360879 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: E0216 00:07:18.377881 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.381228 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.381261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.381270 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.381283 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.381294 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: E0216 00:07:18.394900 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.399479 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.399536 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.399547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.399561 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.399572 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: E0216 00:07:18.412549 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.416464 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.416508 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.416522 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.416540 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.416553 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: E0216 00:07:18.434115 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.438442 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.438489 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.438505 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.438527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.438541 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: E0216 00:07:18.457220 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:18 crc kubenswrapper[4698]: E0216 00:07:18.457375 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.459065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.459140 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.459159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.459187 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.459213 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.562133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.562200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.562215 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.562239 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.562257 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.664863 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.665295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.665495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.665787 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.666077 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.769914 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.770840 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.770982 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.771127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.771264 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.875484 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.875573 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.875591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.875653 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.875676 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.979569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.979689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.979711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.979744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:18 crc kubenswrapper[4698]: I0216 00:07:18.979767 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:18Z","lastTransitionTime":"2026-02-16T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.082489 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.082524 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.082538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.082555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.082568 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:19Z","lastTransitionTime":"2026-02-16T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.185081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.185134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.185144 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.185159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.185172 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:19Z","lastTransitionTime":"2026-02-16T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.231092 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.231161 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.231164 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:19 crc kubenswrapper[4698]: E0216 00:07:19.231313 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:19 crc kubenswrapper[4698]: E0216 00:07:19.231703 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:19 crc kubenswrapper[4698]: E0216 00:07:19.231783 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.254822 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:50:47.024292413 +0000 UTC Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.287716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.288090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.288346 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.288537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.288753 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:19Z","lastTransitionTime":"2026-02-16T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.391347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.391428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.391449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.391473 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.391492 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:19Z","lastTransitionTime":"2026-02-16T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.495246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.495332 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.495368 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.495401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.495432 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:19Z","lastTransitionTime":"2026-02-16T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.597981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.598071 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.598102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.598131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.598152 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:19Z","lastTransitionTime":"2026-02-16T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.700797 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.700837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.700845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.700862 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.700872 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:19Z","lastTransitionTime":"2026-02-16T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.804314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.804383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.804403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.804431 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.804452 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:19Z","lastTransitionTime":"2026-02-16T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.908528 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.908803 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.908832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.908860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:19 crc kubenswrapper[4698]: I0216 00:07:19.908879 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:19Z","lastTransitionTime":"2026-02-16T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.012911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.012978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.012998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.013028 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.013047 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:20Z","lastTransitionTime":"2026-02-16T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.116528 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.117132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.117343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.117576 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.117834 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:20Z","lastTransitionTime":"2026-02-16T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.220743 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.221125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.221264 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.221354 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.221433 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:20Z","lastTransitionTime":"2026-02-16T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.231180 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f"
Feb 16 00:07:20 crc kubenswrapper[4698]: E0216 00:07:20.231408 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.255401 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:56:43.199784647 +0000 UTC
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.325111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.325199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.325221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.325248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.325270 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:20Z","lastTransitionTime":"2026-02-16T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.429308 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.429388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.429410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.429436 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.429453 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:20Z","lastTransitionTime":"2026-02-16T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.532972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.533034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.533047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.533064 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.533077 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:20Z","lastTransitionTime":"2026-02-16T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.590035 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs\") pod \"network-metrics-daemon-fgr4f\" (UID: \"87629f1e-d9d5-4302-a92a-f9ac3bad1707\") " pod="openshift-multus/network-metrics-daemon-fgr4f"
Feb 16 00:07:20 crc kubenswrapper[4698]: E0216 00:07:20.590216 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 00:07:20 crc kubenswrapper[4698]: E0216 00:07:20.590292 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs podName:87629f1e-d9d5-4302-a92a-f9ac3bad1707 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:28.590271701 +0000 UTC m=+58.248170483 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs") pod "network-metrics-daemon-fgr4f" (UID: "87629f1e-d9d5-4302-a92a-f9ac3bad1707") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.636678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.636732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.636746 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.636763 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.636775 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:20Z","lastTransitionTime":"2026-02-16T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.740152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.740252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.740275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.740311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.740341 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:20Z","lastTransitionTime":"2026-02-16T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.844156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.844248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.844279 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.844316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.844343 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:20Z","lastTransitionTime":"2026-02-16T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.947761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.947810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.947821 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.947841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:20 crc kubenswrapper[4698]: I0216 00:07:20.947855 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:20Z","lastTransitionTime":"2026-02-16T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.051979 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.052049 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.052070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.052113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.052131 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:21Z","lastTransitionTime":"2026-02-16T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.155341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.155811 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.155961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.156176 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.156375 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:21Z","lastTransitionTime":"2026-02-16T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.231030 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.231102 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.231230 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:07:21 crc kubenswrapper[4698]: E0216 00:07:21.231543 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 00:07:21 crc kubenswrapper[4698]: E0216 00:07:21.231723 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 00:07:21 crc kubenswrapper[4698]: E0216 00:07:21.231884 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.232420 4698 scope.go:117] "RemoveContainer" containerID="cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.249952 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.256679 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 07:21:47.382621299 +0000 UTC
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.259132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.259171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.259187 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.259204 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.259216 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:21Z","lastTransitionTime":"2026-02-16T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.266283 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.280526 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z"
Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.295202 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.307672 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc 
kubenswrapper[4698]: I0216 00:07:21.330012 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.356769 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.361513 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.361580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.361595 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:21 crc 
kubenswrapper[4698]: I0216 00:07:21.361638 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.361654 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:21Z","lastTransitionTime":"2026-02-16T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.373778 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.390845 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\"
,\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.411785 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.429641 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.443517 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.460443 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157
bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.464705 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.464945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.464955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.464972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.464984 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:21Z","lastTransitionTime":"2026-02-16T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.477319 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.501325 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.512455 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.528847 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9a18f5067e0ed0ae8ed4b54e5f7f9fa04a74ccd98a875437a4629320247901\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:07Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0216 00:07:07.809142 5993 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 00:07:07.809245 5993 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 00:07:07.809365 5993 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 00:07:07.809398 5993 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 00:07:07.809406 5993 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 00:07:07.809424 5993 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 00:07:07.809433 5993 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 00:07:07.809458 5993 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 00:07:07.809494 5993 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 00:07:07.809494 5993 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 00:07:07.809510 5993 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 00:07:07.809535 5993 factory.go:656] Stopping watch factory\\\\nI0216 00:07:07.809549 5993 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:09Z\\\",\\\"message\\\":\\\"ring{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.41\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 00:07:09.429205 6138 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd5
3d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.541306 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b
9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.556336 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\"
,\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.566996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.567037 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.567050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.567070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.567084 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:21Z","lastTransitionTime":"2026-02-16T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.569355 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e5025
0fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.584227 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.599211 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.603097 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/1.log" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.606847 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerStarted","Data":"42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a"} Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.607470 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.615299 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce0
25b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:
06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.646497 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.660949 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.671610 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.671695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.671709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.671729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.671762 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:21Z","lastTransitionTime":"2026-02-16T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.690538 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:09Z\\\",\\\"message\\\":\\\"ring{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.41\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 00:07:09.429205 6138 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2946920287543065
67425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.707800 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.721174 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc 
kubenswrapper[4698]: I0216 00:07:21.735499 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.750695 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.766402 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.775081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 
00:07:21.775156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.775172 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.775257 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.775338 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:21Z","lastTransitionTime":"2026-02-16T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.787189 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.802144 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.815667 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.834604 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.849152 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.870449 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.878403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.878443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.878452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.878468 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.878479 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:21Z","lastTransitionTime":"2026-02-16T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.885725 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.908781 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.964547 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.982154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.982215 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.982226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.982244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.982641 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:21Z","lastTransitionTime":"2026-02-16T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:21 crc kubenswrapper[4698]: I0216 00:07:21.984239 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.001060 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b
9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.017419 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\"
,\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.029602 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.047364 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:09Z\\\",\\\"message\\\":\\\"ring{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.41\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 00:07:09.429205 6138 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",
\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.069210 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.084115 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.085337 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.085364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.085377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.085394 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.085406 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:22Z","lastTransitionTime":"2026-02-16T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.098955 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.112506 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.136656 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.152923 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc 
kubenswrapper[4698]: I0216 00:07:22.188117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.188179 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.188196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.188218 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.188231 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:22Z","lastTransitionTime":"2026-02-16T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.230861 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:22 crc kubenswrapper[4698]: E0216 00:07:22.230995 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.257444 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 15:18:22.418441939 +0000 UTC Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.290691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.290738 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.290748 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.290764 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.290774 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:22Z","lastTransitionTime":"2026-02-16T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.393581 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.393697 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.393722 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.393943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.393961 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:22Z","lastTransitionTime":"2026-02-16T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.497279 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.497373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.497394 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.497420 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.497442 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:22Z","lastTransitionTime":"2026-02-16T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.600270 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.600315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.600326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.600343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.600355 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:22Z","lastTransitionTime":"2026-02-16T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.613027 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/2.log" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.613678 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/1.log" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.617358 4698 generic.go:334] "Generic (PLEG): container finished" podID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerID="42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a" exitCode=1 Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.617451 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerDied","Data":"42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a"} Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.617552 4698 scope.go:117] "RemoveContainer" containerID="cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.620104 4698 scope.go:117] "RemoveContainer" containerID="42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a" Feb 16 00:07:22 crc kubenswrapper[4698]: E0216 00:07:22.620320 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.637248 4698 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.652295 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.665442 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.687879 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.704698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.704767 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.704780 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.704802 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.704816 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:22Z","lastTransitionTime":"2026-02-16T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.706946 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.726778 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b
9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.753319 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\"
,\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.773664 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.792131 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.807012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.807235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.807354 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.807456 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.807559 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:22Z","lastTransitionTime":"2026-02-16T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.820401 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.832167 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.864368 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbc9a3a57235dfecc794daadfcbf515dcbb5f29c5bac7f54bc6879f6539edc9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:09Z\\\",\\\"message\\\":\\\"ring{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/control-plane-machine-set-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.41\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 00:07:09.429205 6138 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:22Z\\\",\\\"message\\\":\\\"]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 00:07:22.376280 6347 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 
options:{GoMap:map[stateless:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7
939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.881931 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.895390 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.910450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 
00:07:22.910496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.910507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.910522 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.910534 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:22Z","lastTransitionTime":"2026-02-16T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.916005 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc kubenswrapper[4698]: I0216 00:07:22.932778 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:22 crc 
kubenswrapper[4698]: I0216 00:07:22.949877 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.013743 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.013801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.013814 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.013832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 
00:07:23.013847 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:23Z","lastTransitionTime":"2026-02-16T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.120736 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.120975 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:07:55.120931316 +0000 UTC m=+84.778830108 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.121356 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.121453 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.121508 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.121567 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.121604 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.121771 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.121797 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:55.121758571 +0000 UTC m=+84.779657353 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.121841 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:55.121826043 +0000 UTC m=+84.779724845 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.121935 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.121962 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.121981 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.122025 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:55.122012149 +0000 UTC m=+84.779910941 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.122258 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.122357 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.122432 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.122558 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:55.122534485 +0000 UTC m=+84.780433247 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.130711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.130830 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.130917 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.131022 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.131104 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:23Z","lastTransitionTime":"2026-02-16T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.231473 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.231522 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.231701 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.231803 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.232339 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.232484 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.233898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.233931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.233943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.233963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.233977 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:23Z","lastTransitionTime":"2026-02-16T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.258356 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:45:31.911274455 +0000 UTC Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.337421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.337485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.337503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.337530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.337550 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:23Z","lastTransitionTime":"2026-02-16T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.441754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.441824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.441844 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.441871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.441889 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:23Z","lastTransitionTime":"2026-02-16T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.545055 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.545113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.545128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.545148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.545162 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:23Z","lastTransitionTime":"2026-02-16T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.624884 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/2.log" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.630826 4698 scope.go:117] "RemoveContainer" containerID="42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a" Feb 16 00:07:23 crc kubenswrapper[4698]: E0216 00:07:23.631097 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.648169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.648211 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.648223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.648239 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.648251 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:23Z","lastTransitionTime":"2026-02-16T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.648858 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.669314 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.689293 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.710432 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.735592 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.752055 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.752437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.752747 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.752990 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.753367 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:23Z","lastTransitionTime":"2026-02-16T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.759904 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.777093 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b
9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.801115 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\"
,\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.820770 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.849650 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:22Z\\\",\\\"message\\\":\\\"]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 00:07:22.376280 6347 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2946920287543065
67425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.858232 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.858327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.858356 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.858392 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.858418 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:23Z","lastTransitionTime":"2026-02-16T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.888747 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.904381 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.927227 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16
T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.943603 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.960599 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.962745 4698 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.962801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.962815 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.962837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.962852 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:23Z","lastTransitionTime":"2026-02-16T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.980503 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z 
is after 2025-08-24T17:21:41Z" Feb 16 00:07:23 crc kubenswrapper[4698]: I0216 00:07:23.993310 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc 
kubenswrapper[4698]: I0216 00:07:24.067077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.067140 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.067161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.067187 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.067205 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:24Z","lastTransitionTime":"2026-02-16T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.169852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.170213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.170341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.170538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.170716 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:24Z","lastTransitionTime":"2026-02-16T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.231478 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:24 crc kubenswrapper[4698]: E0216 00:07:24.231669 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.259561 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:07:45.775246094 +0000 UTC Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.275142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.275537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.275720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.275866 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.276030 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:24Z","lastTransitionTime":"2026-02-16T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.379535 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.379603 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.379639 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.379660 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.379672 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:24Z","lastTransitionTime":"2026-02-16T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.403781 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.420868 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.440357 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f
45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.456961 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.483432 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.483497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.483517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.483550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.483569 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:24Z","lastTransitionTime":"2026-02-16T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.489744 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:22Z\\\",\\\"message\\\":\\\"]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 00:07:22.376280 6347 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2946920287543065
67425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.511753 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.525574 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc 
kubenswrapper[4698]: I0216 00:07:24.541722 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.558835 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.575211 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.587088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 
00:07:24.587418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.587604 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.587951 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.588428 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:24Z","lastTransitionTime":"2026-02-16T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.592819 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.607536 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.621416 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.635850 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.653407 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.673171 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.692052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.692088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.692103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.692125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.692141 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:24Z","lastTransitionTime":"2026-02-16T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.693043 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.711038 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.734134 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157
bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.795721 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.795799 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.795818 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.795846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.795868 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:24Z","lastTransitionTime":"2026-02-16T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.899391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.899509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.899537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.899569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:24 crc kubenswrapper[4698]: I0216 00:07:24.899592 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:24Z","lastTransitionTime":"2026-02-16T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.003571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.003667 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.003681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.003700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.003714 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:25Z","lastTransitionTime":"2026-02-16T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.107439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.108034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.108244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.108431 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.108659 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:25Z","lastTransitionTime":"2026-02-16T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.212969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.213028 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.213041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.213063 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.213077 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:25Z","lastTransitionTime":"2026-02-16T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.231943 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.232074 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:25 crc kubenswrapper[4698]: E0216 00:07:25.232136 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.232187 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:25 crc kubenswrapper[4698]: E0216 00:07:25.232349 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:25 crc kubenswrapper[4698]: E0216 00:07:25.233263 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.260276 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:03:16.987941556 +0000 UTC Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.316790 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.316884 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.316906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.316937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.316965 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:25Z","lastTransitionTime":"2026-02-16T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.420552 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.420707 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.420734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.420767 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.420790 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:25Z","lastTransitionTime":"2026-02-16T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.523923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.523991 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.524005 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.524026 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.524040 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:25Z","lastTransitionTime":"2026-02-16T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.627132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.627575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.627806 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.627977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.628137 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:25Z","lastTransitionTime":"2026-02-16T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.731734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.731793 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.731807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.731829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.731844 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:25Z","lastTransitionTime":"2026-02-16T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.834981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.835055 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.835074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.835103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.835125 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:25Z","lastTransitionTime":"2026-02-16T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.937839 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.937911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.937930 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.937955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:25 crc kubenswrapper[4698]: I0216 00:07:25.937974 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:25Z","lastTransitionTime":"2026-02-16T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.041941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.041996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.042008 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.042031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.042045 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:26Z","lastTransitionTime":"2026-02-16T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.145345 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.145457 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.145480 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.145502 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.145539 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:26Z","lastTransitionTime":"2026-02-16T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.230925 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:26 crc kubenswrapper[4698]: E0216 00:07:26.231117 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.249073 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.249173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.249195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.249224 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.249244 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:26Z","lastTransitionTime":"2026-02-16T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.261375 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:13:01.245081726 +0000 UTC Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.353396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.353735 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.353862 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.353962 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.354046 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:26Z","lastTransitionTime":"2026-02-16T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.456744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.456783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.456794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.456809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.456819 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:26Z","lastTransitionTime":"2026-02-16T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.559342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.559382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.559392 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.559407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.559418 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:26Z","lastTransitionTime":"2026-02-16T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.662179 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.662228 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.662240 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.662257 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.662270 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:26Z","lastTransitionTime":"2026-02-16T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.765322 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.765381 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.765400 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.765425 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.765443 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:26Z","lastTransitionTime":"2026-02-16T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.867940 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.868015 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.868033 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.868060 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.868081 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:26Z","lastTransitionTime":"2026-02-16T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.970823 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.970892 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.970915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.970946 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:26 crc kubenswrapper[4698]: I0216 00:07:26.970966 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:26Z","lastTransitionTime":"2026-02-16T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.073914 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.073968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.073986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.074011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.074029 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:27Z","lastTransitionTime":"2026-02-16T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.177165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.177217 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.177230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.177248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.177263 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:27Z","lastTransitionTime":"2026-02-16T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.231700 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.231772 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.231852 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:27 crc kubenswrapper[4698]: E0216 00:07:27.231954 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:27 crc kubenswrapper[4698]: E0216 00:07:27.232135 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:27 crc kubenswrapper[4698]: E0216 00:07:27.232239 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.261972 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:55:29.872210475 +0000 UTC Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.280077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.280145 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.280170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.280200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.280227 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:27Z","lastTransitionTime":"2026-02-16T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.383715 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.383844 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.383907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.383936 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.383955 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:27Z","lastTransitionTime":"2026-02-16T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.486794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.487165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.487240 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.487321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.487393 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:27Z","lastTransitionTime":"2026-02-16T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.593562 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.593682 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.593706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.593749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.593782 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:27Z","lastTransitionTime":"2026-02-16T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.696908 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.697043 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.697132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.697162 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.697182 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:27Z","lastTransitionTime":"2026-02-16T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.801558 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.802001 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.802144 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.802290 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.802418 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:27Z","lastTransitionTime":"2026-02-16T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.906000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.906053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.906066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.906087 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:27 crc kubenswrapper[4698]: I0216 00:07:27.906102 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:27Z","lastTransitionTime":"2026-02-16T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.010467 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.011460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.011653 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.011801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.011949 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:28Z","lastTransitionTime":"2026-02-16T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.114944 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.115018 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.115041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.115068 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.115088 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:28Z","lastTransitionTime":"2026-02-16T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.218527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.218563 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.218573 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.218609 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.218642 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:28Z","lastTransitionTime":"2026-02-16T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.231403 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:28 crc kubenswrapper[4698]: E0216 00:07:28.231547 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.262341 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 08:07:34.54896974 +0000 UTC Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.322917 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.322983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.323006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.323035 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.323058 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:28Z","lastTransitionTime":"2026-02-16T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.425838 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.425885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.425894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.425909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.425919 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:28Z","lastTransitionTime":"2026-02-16T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.494760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.495231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.495440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.495726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.495903 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:28Z","lastTransitionTime":"2026-02-16T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:28 crc kubenswrapper[4698]: E0216 00:07:28.516556 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.522214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.522290 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.522307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.522334 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.522351 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:28Z","lastTransitionTime":"2026-02-16T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:28 crc kubenswrapper[4698]: E0216 00:07:28.539707 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.545838 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.546204 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.546360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.546525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.546705 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:28Z","lastTransitionTime":"2026-02-16T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:28 crc kubenswrapper[4698]: E0216 00:07:28.577324 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.597651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.597714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.597734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.597762 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.597791 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:28Z","lastTransitionTime":"2026-02-16T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:28 crc kubenswrapper[4698]: E0216 00:07:28.631160 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.637070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.637129 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.637146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.637168 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.637203 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:28Z","lastTransitionTime":"2026-02-16T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:28 crc kubenswrapper[4698]: E0216 00:07:28.651151 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:28 crc kubenswrapper[4698]: E0216 00:07:28.651310 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.653112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.653144 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.653155 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.653184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.653198 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:28Z","lastTransitionTime":"2026-02-16T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.688041 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs\") pod \"network-metrics-daemon-fgr4f\" (UID: \"87629f1e-d9d5-4302-a92a-f9ac3bad1707\") " pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:28 crc kubenswrapper[4698]: E0216 00:07:28.688220 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:07:28 crc kubenswrapper[4698]: E0216 00:07:28.688300 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs podName:87629f1e-d9d5-4302-a92a-f9ac3bad1707 nodeName:}" failed. No retries permitted until 2026-02-16 00:07:44.688276121 +0000 UTC m=+74.346174883 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs") pod "network-metrics-daemon-fgr4f" (UID: "87629f1e-d9d5-4302-a92a-f9ac3bad1707") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.755900 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.755950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.755960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.755978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.755990 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:28Z","lastTransitionTime":"2026-02-16T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.858547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.858610 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.858656 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.858695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.858713 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:28Z","lastTransitionTime":"2026-02-16T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.960949 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.961023 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.961047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.961079 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:28 crc kubenswrapper[4698]: I0216 00:07:28.961102 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:28Z","lastTransitionTime":"2026-02-16T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.064532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.065403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.065443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.065484 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.065503 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:29Z","lastTransitionTime":"2026-02-16T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.169452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.169541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.169553 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.169572 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.169582 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:29Z","lastTransitionTime":"2026-02-16T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.231024 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.231030 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.231227 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:29 crc kubenswrapper[4698]: E0216 00:07:29.231417 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:29 crc kubenswrapper[4698]: E0216 00:07:29.232414 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:29 crc kubenswrapper[4698]: E0216 00:07:29.232963 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.263389 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:28:49.492774682 +0000 UTC Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.273169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.273232 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.273252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.273278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.273304 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:29Z","lastTransitionTime":"2026-02-16T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.376303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.376376 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.376400 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.376643 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.376670 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:29Z","lastTransitionTime":"2026-02-16T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.479879 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.480442 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.480756 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.480961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.481157 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:29Z","lastTransitionTime":"2026-02-16T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.585309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.585371 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.585388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.585413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.585434 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:29Z","lastTransitionTime":"2026-02-16T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.688988 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.689047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.689069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.689099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.689121 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:29Z","lastTransitionTime":"2026-02-16T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.792974 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.793086 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.793110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.793176 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.793202 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:29Z","lastTransitionTime":"2026-02-16T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.897519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.897584 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.897602 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.897680 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:29 crc kubenswrapper[4698]: I0216 00:07:29.897705 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:29Z","lastTransitionTime":"2026-02-16T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.000870 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.000946 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.000965 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.000998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.001016 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:30Z","lastTransitionTime":"2026-02-16T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.103713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.103786 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.103806 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.103834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.103857 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:30Z","lastTransitionTime":"2026-02-16T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.207672 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.207778 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.207796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.207823 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.207839 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:30Z","lastTransitionTime":"2026-02-16T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.231583 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:30 crc kubenswrapper[4698]: E0216 00:07:30.231907 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.264159 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:11:48.613586508 +0000 UTC Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.311241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.311319 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.311345 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.311376 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.311399 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:30Z","lastTransitionTime":"2026-02-16T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.415133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.415196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.415222 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.415253 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.415278 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:30Z","lastTransitionTime":"2026-02-16T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.518904 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.518971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.518994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.519030 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.519057 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:30Z","lastTransitionTime":"2026-02-16T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.622401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.622486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.622582 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.622999 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.623054 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:30Z","lastTransitionTime":"2026-02-16T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.727186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.727254 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.727286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.727316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.727342 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:30Z","lastTransitionTime":"2026-02-16T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.831668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.831720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.831728 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.831745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.831756 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:30Z","lastTransitionTime":"2026-02-16T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.935527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.935592 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.935644 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.935677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:30 crc kubenswrapper[4698]: I0216 00:07:30.935712 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:30Z","lastTransitionTime":"2026-02-16T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.038755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.038834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.038859 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.038887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.038906 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:31Z","lastTransitionTime":"2026-02-16T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.142693 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.142783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.142802 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.142828 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.142847 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:31Z","lastTransitionTime":"2026-02-16T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.230838 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.231067 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:31 crc kubenswrapper[4698]: E0216 00:07:31.231777 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:31 crc kubenswrapper[4698]: E0216 00:07:31.231925 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.231112 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:31 crc kubenswrapper[4698]: E0216 00:07:31.232168 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.245867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.245928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.245945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.245969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.245988 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:31Z","lastTransitionTime":"2026-02-16T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.265068 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 02:30:48.311368148 +0000 UTC Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.268171 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\
\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc603
5005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.284656 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.314609 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:22Z\\\",\\\"message\\\":\\\"]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 00:07:22.376280 6347 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2946920287543065
67425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.333158 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd5
59a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.348709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.348761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.348779 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:31 crc 
kubenswrapper[4698]: I0216 00:07:31.348803 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.348824 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:31Z","lastTransitionTime":"2026-02-16T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.357117 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.376856 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc 
kubenswrapper[4698]: I0216 00:07:31.403927 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a39b1-d242-40d4-8179-cf7b71aaeab8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60a206ea8b608682c6898dd9051903dcdf19e39c22d1adba760b43177b474a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2915589b18e0db91214ca20d06f488bbc04f6f8a83bd4ccaaf294f99fc4aa6\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcedd751c83fbf60506332eb79ff3d8e7bfd67099c0bcf36b1acfff96b35bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.426558 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.447399 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.452326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.452414 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.452437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.452466 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.452485 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:31Z","lastTransitionTime":"2026-02-16T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.468734 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.484197 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.497272 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.517843 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c4
1a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.534749 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b
9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.552111 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\"
,\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.556091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.556167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.556185 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.556216 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.556234 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:31Z","lastTransitionTime":"2026-02-16T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.569366 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e5025
0fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.590325 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.608654 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.659176 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.659275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.659289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.659309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.659325 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:31Z","lastTransitionTime":"2026-02-16T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.762664 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.762733 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.762757 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.762792 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.762816 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:31Z","lastTransitionTime":"2026-02-16T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.867019 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.867081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.867099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.867124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.867142 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:31Z","lastTransitionTime":"2026-02-16T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.969667 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.969721 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.969739 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.969763 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:31 crc kubenswrapper[4698]: I0216 00:07:31.969781 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:31Z","lastTransitionTime":"2026-02-16T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.073650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.073699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.073716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.073745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.073770 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:32Z","lastTransitionTime":"2026-02-16T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.177066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.177121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.177139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.177162 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.177178 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:32Z","lastTransitionTime":"2026-02-16T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.231013 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:32 crc kubenswrapper[4698]: E0216 00:07:32.231216 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.266216 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:30:04.844402236 +0000 UTC Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.280494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.280537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.280557 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.280582 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.280600 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:32Z","lastTransitionTime":"2026-02-16T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.384656 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.384812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.384845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.384875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.384899 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:32Z","lastTransitionTime":"2026-02-16T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.488612 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.488702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.488720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.488746 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.488764 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:32Z","lastTransitionTime":"2026-02-16T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.591863 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.591959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.591981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.592012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.592034 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:32Z","lastTransitionTime":"2026-02-16T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.695470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.695548 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.695567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.695596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.695657 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:32Z","lastTransitionTime":"2026-02-16T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.799992 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.800064 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.800086 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.800116 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.800135 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:32Z","lastTransitionTime":"2026-02-16T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.903386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.903446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.903460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.903478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:32 crc kubenswrapper[4698]: I0216 00:07:32.903492 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:32Z","lastTransitionTime":"2026-02-16T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.006975 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.007053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.007076 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.007109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.007132 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:33Z","lastTransitionTime":"2026-02-16T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.110056 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.110102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.110118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.110139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.110154 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:33Z","lastTransitionTime":"2026-02-16T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.213140 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.213200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.213220 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.213244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.213263 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:33Z","lastTransitionTime":"2026-02-16T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.230992 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.231056 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.231188 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:33 crc kubenswrapper[4698]: E0216 00:07:33.231224 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:33 crc kubenswrapper[4698]: E0216 00:07:33.231343 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:33 crc kubenswrapper[4698]: E0216 00:07:33.231483 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.266671 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:41:44.252662998 +0000 UTC Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.316887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.316959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.316978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.317003 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.317022 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:33Z","lastTransitionTime":"2026-02-16T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.419699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.420261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.420422 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.420568 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.420759 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:33Z","lastTransitionTime":"2026-02-16T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.524079 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.524174 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.524195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.524223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.524243 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:33Z","lastTransitionTime":"2026-02-16T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.628215 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.628278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.628296 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.628320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.628339 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:33Z","lastTransitionTime":"2026-02-16T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.732151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.732200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.732213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.732231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.732243 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:33Z","lastTransitionTime":"2026-02-16T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.835663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.835790 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.836105 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.850068 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.850101 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:33Z","lastTransitionTime":"2026-02-16T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.954109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.954168 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.954189 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.954212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:33 crc kubenswrapper[4698]: I0216 00:07:33.954230 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:33Z","lastTransitionTime":"2026-02-16T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.057100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.057153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.057172 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.057196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.057215 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:34Z","lastTransitionTime":"2026-02-16T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.160455 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.160500 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.160517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.160538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.160555 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:34Z","lastTransitionTime":"2026-02-16T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.231688 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:34 crc kubenswrapper[4698]: E0216 00:07:34.231863 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.263291 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.263361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.263428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.263454 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.263475 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:34Z","lastTransitionTime":"2026-02-16T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.267538 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:33:20.351376407 +0000 UTC Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.366785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.366841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.366858 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.366883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.366900 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:34Z","lastTransitionTime":"2026-02-16T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.470612 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.470707 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.470731 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.470761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.470780 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:34Z","lastTransitionTime":"2026-02-16T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.574329 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.574394 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.574413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.574439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.574460 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:34Z","lastTransitionTime":"2026-02-16T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.677827 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.678226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.678364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.678525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.678679 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:34Z","lastTransitionTime":"2026-02-16T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.782173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.782270 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.782297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.782332 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.782356 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:34Z","lastTransitionTime":"2026-02-16T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.886101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.886494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.886684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.886830 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.886985 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:34Z","lastTransitionTime":"2026-02-16T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.991660 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.991740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.991761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.991788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:34 crc kubenswrapper[4698]: I0216 00:07:34.991806 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:34Z","lastTransitionTime":"2026-02-16T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.095196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.095250 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.095269 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.095296 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.095316 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:35Z","lastTransitionTime":"2026-02-16T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.199110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.199164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.199176 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.199200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.199217 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:35Z","lastTransitionTime":"2026-02-16T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.230855 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.230983 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.231024 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:35 crc kubenswrapper[4698]: E0216 00:07:35.231418 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:35 crc kubenswrapper[4698]: E0216 00:07:35.231438 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:35 crc kubenswrapper[4698]: E0216 00:07:35.231485 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.267802 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:44:56.603944825 +0000 UTC Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.302685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.302743 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.302762 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.302821 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.302841 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:35Z","lastTransitionTime":"2026-02-16T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.406157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.406210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.406226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.406243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.406256 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:35Z","lastTransitionTime":"2026-02-16T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.508956 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.509384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.509675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.509877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.510060 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:35Z","lastTransitionTime":"2026-02-16T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.612934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.613241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.613342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.613452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.613561 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:35Z","lastTransitionTime":"2026-02-16T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.717878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.717952 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.717974 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.718000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.718018 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:35Z","lastTransitionTime":"2026-02-16T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.821537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.821642 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.821662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.821690 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.821709 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:35Z","lastTransitionTime":"2026-02-16T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.925741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.925796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.925813 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.925838 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:35 crc kubenswrapper[4698]: I0216 00:07:35.925861 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:35Z","lastTransitionTime":"2026-02-16T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.028370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.028426 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.028437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.028455 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.028465 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:36Z","lastTransitionTime":"2026-02-16T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.131533 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.131661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.131688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.131721 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.131742 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:36Z","lastTransitionTime":"2026-02-16T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.230800 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:36 crc kubenswrapper[4698]: E0216 00:07:36.231788 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.237063 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.237146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.237171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.237205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.237236 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:36Z","lastTransitionTime":"2026-02-16T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.268500 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 03:31:54.800498735 +0000 UTC Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.345756 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.346266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.346414 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.346561 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.346735 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:36Z","lastTransitionTime":"2026-02-16T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.450070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.450517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.450688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.450886 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.451026 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:36Z","lastTransitionTime":"2026-02-16T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.554170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.554207 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.554217 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.554232 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.554244 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:36Z","lastTransitionTime":"2026-02-16T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.657860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.657902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.657916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.657934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.657948 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:36Z","lastTransitionTime":"2026-02-16T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.760371 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.760442 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.760507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.760542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.760572 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:36Z","lastTransitionTime":"2026-02-16T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.862701 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.862768 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.862787 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.862813 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.862834 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:36Z","lastTransitionTime":"2026-02-16T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.966075 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.966151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.966172 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.966204 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:36 crc kubenswrapper[4698]: I0216 00:07:36.966223 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:36Z","lastTransitionTime":"2026-02-16T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.069550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.069661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.069675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.069696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.069711 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:37Z","lastTransitionTime":"2026-02-16T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.172337 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.172373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.172405 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.172421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.172431 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:37Z","lastTransitionTime":"2026-02-16T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.231735 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.231799 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:37 crc kubenswrapper[4698]: E0216 00:07:37.231912 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.231806 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:37 crc kubenswrapper[4698]: E0216 00:07:37.232136 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:37 crc kubenswrapper[4698]: E0216 00:07:37.232419 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.233242 4698 scope.go:117] "RemoveContainer" containerID="42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a" Feb 16 00:07:37 crc kubenswrapper[4698]: E0216 00:07:37.233436 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.270451 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 07:02:02.619860416 +0000 UTC Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.275738 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.275785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.275800 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.275823 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.275848 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:37Z","lastTransitionTime":"2026-02-16T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.378983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.379050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.379070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.379094 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.379113 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:37Z","lastTransitionTime":"2026-02-16T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.483266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.484118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.484264 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.484360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.484442 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:37Z","lastTransitionTime":"2026-02-16T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.587567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.587640 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.587656 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.587673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.587685 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:37Z","lastTransitionTime":"2026-02-16T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.690194 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.690249 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.690262 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.690281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.690295 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:37Z","lastTransitionTime":"2026-02-16T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.796238 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.796291 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.796303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.796322 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.796336 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:37Z","lastTransitionTime":"2026-02-16T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.899112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.899162 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.899177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.899198 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:37 crc kubenswrapper[4698]: I0216 00:07:37.899212 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:37Z","lastTransitionTime":"2026-02-16T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.002031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.002114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.002150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.002177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.002191 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.104827 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.104887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.104911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.104940 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.104961 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.207812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.207875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.207892 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.207922 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.207943 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.231396 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:38 crc kubenswrapper[4698]: E0216 00:07:38.231576 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.271279 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:26:25.072363338 +0000 UTC Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.311158 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.311210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.311222 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.311244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.311259 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.414642 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.414711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.414724 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.414744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.414770 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.518105 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.518160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.518172 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.518193 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.518205 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.620591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.620692 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.620718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.620754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.620781 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.671502 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.671564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.671581 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.671608 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.671650 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: E0216 00:07:38.697191 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.701660 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.701725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.701740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.701766 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.701782 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: E0216 00:07:38.716686 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.722090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.722149 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.722164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.722192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.722209 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: E0216 00:07:38.735418 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.739172 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.739237 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.739253 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.739275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.739289 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: E0216 00:07:38.752001 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.756183 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.756223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.756235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.756257 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.756268 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: E0216 00:07:38.767848 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:38 crc kubenswrapper[4698]: E0216 00:07:38.767958 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.769428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.769507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.769525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.769573 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.769587 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.872705 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.872752 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.872761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.872777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.872788 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.975590 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.975647 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.975657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.975674 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:38 crc kubenswrapper[4698]: I0216 00:07:38.975683 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:38Z","lastTransitionTime":"2026-02-16T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.078428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.078477 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.078488 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.078508 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.078521 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:39Z","lastTransitionTime":"2026-02-16T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.181816 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.181860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.181871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.181893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.181905 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:39Z","lastTransitionTime":"2026-02-16T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.231524 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.231538 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:39 crc kubenswrapper[4698]: E0216 00:07:39.231703 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.231546 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:39 crc kubenswrapper[4698]: E0216 00:07:39.231804 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:39 crc kubenswrapper[4698]: E0216 00:07:39.231840 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.272360 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:00:42.577792024 +0000 UTC Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.284637 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.284664 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.284675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.284694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.284706 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:39Z","lastTransitionTime":"2026-02-16T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.388077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.388155 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.388173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.388206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.388224 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:39Z","lastTransitionTime":"2026-02-16T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.491839 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.491918 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.491937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.491976 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.492001 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:39Z","lastTransitionTime":"2026-02-16T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.595842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.595916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.595936 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.595967 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.595987 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:39Z","lastTransitionTime":"2026-02-16T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.698727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.698773 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.698787 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.698804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.698816 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:39Z","lastTransitionTime":"2026-02-16T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.801659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.801738 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.801759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.801791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.801810 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:39Z","lastTransitionTime":"2026-02-16T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.905295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.905365 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.905393 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.905422 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:39 crc kubenswrapper[4698]: I0216 00:07:39.905444 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:39Z","lastTransitionTime":"2026-02-16T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.008057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.008159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.008190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.008237 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.008258 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:40Z","lastTransitionTime":"2026-02-16T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.111773 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.111840 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.111858 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.111885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.111982 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:40Z","lastTransitionTime":"2026-02-16T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.215550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.215664 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.215685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.215712 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.215731 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:40Z","lastTransitionTime":"2026-02-16T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.231666 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:40 crc kubenswrapper[4698]: E0216 00:07:40.231865 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.272527 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:43:11.910302445 +0000 UTC Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.319710 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.319782 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.319805 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.319837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.319861 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:40Z","lastTransitionTime":"2026-02-16T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.423081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.423127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.423137 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.423154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.423167 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:40Z","lastTransitionTime":"2026-02-16T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.525460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.525537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.525569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.525601 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.525660 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:40Z","lastTransitionTime":"2026-02-16T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.628150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.628194 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.628204 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.628220 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.628231 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:40Z","lastTransitionTime":"2026-02-16T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.731049 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.731085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.731094 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.731110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.731120 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:40Z","lastTransitionTime":"2026-02-16T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.833375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.833432 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.833444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.833462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.833476 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:40Z","lastTransitionTime":"2026-02-16T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.936817 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.936874 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.936886 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.936906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:40 crc kubenswrapper[4698]: I0216 00:07:40.936918 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:40Z","lastTransitionTime":"2026-02-16T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.040248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.040309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.040326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.040354 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.040378 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:41Z","lastTransitionTime":"2026-02-16T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.143760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.143824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.143847 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.143871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.143889 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:41Z","lastTransitionTime":"2026-02-16T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.231053 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.231107 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.231261 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:41 crc kubenswrapper[4698]: E0216 00:07:41.231316 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:41 crc kubenswrapper[4698]: E0216 00:07:41.231497 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:41 crc kubenswrapper[4698]: E0216 00:07:41.231658 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.250666 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.251066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.251091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.251110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.251123 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:41Z","lastTransitionTime":"2026-02-16T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.257334 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:22Z\\\",\\\"message\\\":\\\"]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 00:07:22.376280 6347 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2946920287543065
67425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.273691 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:39:05.501492817 +0000 UTC Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.282997 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.299516 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.315713 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.328913 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.341216 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.353923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 
00:07:41.353996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.354012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.354033 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.354064 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:41Z","lastTransitionTime":"2026-02-16T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.357491 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.372305 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc 
kubenswrapper[4698]: I0216 00:07:41.387329 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a39b1-d242-40d4-8179-cf7b71aaeab8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60a206ea8b608682c6898dd9051903dcdf19e39c22d1adba760b43177b474a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2915589b18e0db91214ca20d06f488bbc04f6f8a83bd4ccaaf294f99fc4aa6\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcedd751c83fbf60506332eb79ff3d8e7bfd67099c0bcf36b1acfff96b35bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.399983 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.416125 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.433154 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.452867 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.456807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.456842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.456855 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.456875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.456890 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:41Z","lastTransitionTime":"2026-02-16T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.475034 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.498009 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157
bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.515470 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.536061 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.551527 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.559708 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.559765 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.559795 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.559814 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.559828 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:41Z","lastTransitionTime":"2026-02-16T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.662810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.663116 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.663188 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.663286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.663348 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:41Z","lastTransitionTime":"2026-02-16T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.766190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.766249 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.766263 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.766282 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.766300 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:41Z","lastTransitionTime":"2026-02-16T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.868578 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.868760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.868783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.868809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.868827 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:41Z","lastTransitionTime":"2026-02-16T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.972891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.972943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.972954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.972976 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:41 crc kubenswrapper[4698]: I0216 00:07:41.972991 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:41Z","lastTransitionTime":"2026-02-16T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.075644 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.075711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.075727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.075755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.075783 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:42Z","lastTransitionTime":"2026-02-16T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.179464 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.179532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.179556 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.179589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.179643 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:42Z","lastTransitionTime":"2026-02-16T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.230957 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:42 crc kubenswrapper[4698]: E0216 00:07:42.231168 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.274385 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:46:34.733228391 +0000 UTC Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.282727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.282791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.282816 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.282847 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.282870 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:42Z","lastTransitionTime":"2026-02-16T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.386101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.386165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.386180 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.386202 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.386217 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:42Z","lastTransitionTime":"2026-02-16T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.490436 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.490496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.490515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.490540 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.490558 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:42Z","lastTransitionTime":"2026-02-16T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.593765 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.593802 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.593813 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.593830 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.593842 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:42Z","lastTransitionTime":"2026-02-16T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.697262 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.697696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.697832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.697950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.698043 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:42Z","lastTransitionTime":"2026-02-16T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.801965 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.802468 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.802684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.802901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.803042 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:42Z","lastTransitionTime":"2026-02-16T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.906013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.906515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.906707 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.906885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:42 crc kubenswrapper[4698]: I0216 00:07:42.907024 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:42Z","lastTransitionTime":"2026-02-16T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.010109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.010489 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.010658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.010794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.010898 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:43Z","lastTransitionTime":"2026-02-16T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.114004 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.114042 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.114050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.114063 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.114073 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:43Z","lastTransitionTime":"2026-02-16T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.217758 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.218073 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.218159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.218248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.218335 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:43Z","lastTransitionTime":"2026-02-16T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.231449 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:43 crc kubenswrapper[4698]: E0216 00:07:43.231685 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.231455 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:43 crc kubenswrapper[4698]: E0216 00:07:43.231937 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.231449 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:43 crc kubenswrapper[4698]: E0216 00:07:43.232182 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.275356 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:59:27.857652746 +0000 UTC Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.321950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.322019 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.322040 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.322067 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.322085 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:43Z","lastTransitionTime":"2026-02-16T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.424847 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.424952 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.424979 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.425023 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.425052 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:43Z","lastTransitionTime":"2026-02-16T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.527449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.527505 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.527517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.527538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.527551 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:43Z","lastTransitionTime":"2026-02-16T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.630145 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.630194 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.630218 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.630241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.630259 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:43Z","lastTransitionTime":"2026-02-16T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.732741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.732794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.732808 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.732825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.732858 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:43Z","lastTransitionTime":"2026-02-16T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.835259 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.835324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.835339 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.835360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.835374 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:43Z","lastTransitionTime":"2026-02-16T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.938330 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.938411 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.938424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.938444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:43 crc kubenswrapper[4698]: I0216 00:07:43.938460 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:43Z","lastTransitionTime":"2026-02-16T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.041219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.041260 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.041272 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.041287 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.041299 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:44Z","lastTransitionTime":"2026-02-16T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.144204 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.144268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.144279 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.144294 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.144307 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:44Z","lastTransitionTime":"2026-02-16T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.231023 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:44 crc kubenswrapper[4698]: E0216 00:07:44.231210 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.247006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.247067 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.247089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.247132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.247160 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:44Z","lastTransitionTime":"2026-02-16T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.276600 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 22:48:16.165765127 +0000 UTC Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.350097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.350159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.350173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.350192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.350205 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:44Z","lastTransitionTime":"2026-02-16T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.453472 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.453519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.453532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.453550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.453566 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:44Z","lastTransitionTime":"2026-02-16T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.556689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.556750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.556763 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.556786 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.556799 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:44Z","lastTransitionTime":"2026-02-16T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.660541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.660627 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.660642 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.660666 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.660682 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:44Z","lastTransitionTime":"2026-02-16T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.763974 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.764042 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.764056 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.764074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.764089 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:44Z","lastTransitionTime":"2026-02-16T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.777675 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs\") pod \"network-metrics-daemon-fgr4f\" (UID: \"87629f1e-d9d5-4302-a92a-f9ac3bad1707\") " pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:44 crc kubenswrapper[4698]: E0216 00:07:44.777916 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:07:44 crc kubenswrapper[4698]: E0216 00:07:44.778052 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs podName:87629f1e-d9d5-4302-a92a-f9ac3bad1707 nodeName:}" failed. No retries permitted until 2026-02-16 00:08:16.77801884 +0000 UTC m=+106.435917642 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs") pod "network-metrics-daemon-fgr4f" (UID: "87629f1e-d9d5-4302-a92a-f9ac3bad1707") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.867779 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.867838 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.867861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.867894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.867918 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:44Z","lastTransitionTime":"2026-02-16T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.971483 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.971581 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.971665 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.971694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:44 crc kubenswrapper[4698]: I0216 00:07:44.971713 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:44Z","lastTransitionTime":"2026-02-16T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.074875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.074935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.074954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.074978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.074994 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:45Z","lastTransitionTime":"2026-02-16T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.178660 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.178726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.178759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.178792 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.178816 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:45Z","lastTransitionTime":"2026-02-16T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.231450 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:45 crc kubenswrapper[4698]: E0216 00:07:45.231970 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.231674 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.231440 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:45 crc kubenswrapper[4698]: E0216 00:07:45.232255 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:45 crc kubenswrapper[4698]: E0216 00:07:45.232454 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.276743 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:36:39.308702022 +0000 UTC Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.281781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.282069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.282269 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.282476 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.282697 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:45Z","lastTransitionTime":"2026-02-16T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.386637 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.386672 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.386686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.386706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.386719 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:45Z","lastTransitionTime":"2026-02-16T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.490365 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.490405 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.490417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.490433 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.490445 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:45Z","lastTransitionTime":"2026-02-16T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.592903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.593007 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.593028 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.593054 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.593075 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:45Z","lastTransitionTime":"2026-02-16T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.696026 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.696100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.696120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.696148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.696171 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:45Z","lastTransitionTime":"2026-02-16T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.714743 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dv2d_69838a3a-c20d-4770-b95f-ab85a265d53c/kube-multus/0.log" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.714844 4698 generic.go:334] "Generic (PLEG): container finished" podID="69838a3a-c20d-4770-b95f-ab85a265d53c" containerID="3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada" exitCode=1 Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.714905 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dv2d" event={"ID":"69838a3a-c20d-4770-b95f-ab85a265d53c","Type":"ContainerDied","Data":"3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada"} Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.715558 4698 scope.go:117] "RemoveContainer" containerID="3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.733652 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd5
59a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.751059 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:45Z\\\",\\\"message\\\":\\\"2026-02-16T00:06:59+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8\\\\n2026-02-16T00:07:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8 to /host/opt/cni/bin/\\\\n2026-02-16T00:07:00Z [verbose] multus-daemon started\\\\n2026-02-16T00:07:00Z [verbose] Readiness Indicator file check\\\\n2026-02-16T00:07:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.765697 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.782429 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a39b1-d242-40d4-8179-cf7b71aaeab8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60a206ea8b608682c6898dd9051903dcdf19e39c22d1adba760b43177b474a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2915589b18e0db91214ca20d06f488bbc04f6f8a83bd4ccaaf294f99fc4aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcedd751c83fbf60506332eb79ff3d8e7bfd67099c0bcf36b1acfff96b35bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e
48ca1b9eac438ec31b0520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.797672 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.801669 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.801740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.801756 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.801777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.801793 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:45Z","lastTransitionTime":"2026-02-16T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.811255 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.827017 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.842954 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.854174 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.877542 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c4
1a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.893979 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b
9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.905221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.905275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.905286 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.905306 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.905320 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:45Z","lastTransitionTime":"2026-02-16T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.918108 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.934177 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.948442 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.963839 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:45 crc kubenswrapper[4698]: I0216 00:07:45.991870 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.007842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.007926 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.007958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.007982 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.007995 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:46Z","lastTransitionTime":"2026-02-16T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.008511 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.032956 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:22Z\\\",\\\"message\\\":\\\"]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 00:07:22.376280 6347 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2946920287543065
67425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.112288 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.112341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.112354 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.112381 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.112398 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:46Z","lastTransitionTime":"2026-02-16T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.215104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.215191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.215206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.215225 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.215240 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:46Z","lastTransitionTime":"2026-02-16T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.230846 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:46 crc kubenswrapper[4698]: E0216 00:07:46.231004 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.277567 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:00:22.125572029 +0000 UTC Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.317714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.317756 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.317765 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.317784 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.317796 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:46Z","lastTransitionTime":"2026-02-16T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.420512 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.420573 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.420588 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.420631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.420646 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:46Z","lastTransitionTime":"2026-02-16T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.523104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.523194 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.523222 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.523256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.523282 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:46Z","lastTransitionTime":"2026-02-16T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.627576 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.627675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.627690 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.627716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.627754 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:46Z","lastTransitionTime":"2026-02-16T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.719786 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dv2d_69838a3a-c20d-4770-b95f-ab85a265d53c/kube-multus/0.log" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.719860 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dv2d" event={"ID":"69838a3a-c20d-4770-b95f-ab85a265d53c","Type":"ContainerStarted","Data":"0e92aaa262728b5ab9af6556d29d5558d08822fe1b06333269be4b1ed3a7abc2"} Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.730422 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.730484 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.730497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.730515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.730527 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:46Z","lastTransitionTime":"2026-02-16T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.741210 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.754477 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.766147 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.782899 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c4
1a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.807482 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b
9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.826609 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\"
,\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.833052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.833124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.833139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.833156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.833169 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:46Z","lastTransitionTime":"2026-02-16T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.844515 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e5025
0fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.860389 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.877170 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.902763 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.915280 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.935688 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:22Z\\\",\\\"message\\\":\\\"]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 00:07:22.376280 6347 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2946920287543065
67425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.936762 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.936835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.936859 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.936891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.936917 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:46Z","lastTransitionTime":"2026-02-16T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.949658 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.967911 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e92aaa262728b5ab9af6556d29d5558d08822fe1b06333269be4b1ed3a7abc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:45Z\\\",\\\"message\\\":\\\"2026-02-16T00:06:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8\\\\n2026-02-16T00:07:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8 to /host/opt/cni/bin/\\\\n2026-02-16T00:07:00Z [verbose] multus-daemon started\\\\n2026-02-16T00:07:00Z [verbose] Readiness Indicator file check\\\\n2026-02-16T00:07:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc kubenswrapper[4698]: I0216 00:07:46.980554 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:46 crc 
kubenswrapper[4698]: I0216 00:07:46.997531 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a39b1-d242-40d4-8179-cf7b71aaeab8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60a206ea8b608682c6898dd9051903dcdf19e39c22d1adba760b43177b474a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2915589b18e0db91214ca20d06f488bbc04f6f8a83bd4ccaaf294f99fc4aa6\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcedd751c83fbf60506332eb79ff3d8e7bfd67099c0bcf36b1acfff96b35bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.011872 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.025026 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.039871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.039957 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.040005 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.040029 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.040046 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:47Z","lastTransitionTime":"2026-02-16T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.144179 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.144463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.144485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.144514 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.144533 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:47Z","lastTransitionTime":"2026-02-16T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.231936 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.232082 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:47 crc kubenswrapper[4698]: E0216 00:07:47.232170 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:47 crc kubenswrapper[4698]: E0216 00:07:47.232364 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.232471 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:47 crc kubenswrapper[4698]: E0216 00:07:47.232716 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.247940 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.247998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.248012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.248036 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.248051 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:47Z","lastTransitionTime":"2026-02-16T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.278380 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 19:06:26.807756863 +0000 UTC Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.351109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.351179 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.351190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.351222 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.351237 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:47Z","lastTransitionTime":"2026-02-16T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.455603 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.455705 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.455725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.455746 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.455765 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:47Z","lastTransitionTime":"2026-02-16T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.558787 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.558842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.558854 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.558872 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.558886 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:47Z","lastTransitionTime":"2026-02-16T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.662595 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.662713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.662740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.662765 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.662786 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:47Z","lastTransitionTime":"2026-02-16T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.765915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.765968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.765984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.766002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.766015 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:47Z","lastTransitionTime":"2026-02-16T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.869874 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.870385 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.870543 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.870741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.870901 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:47Z","lastTransitionTime":"2026-02-16T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.974808 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.975213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.975387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.975600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:47 crc kubenswrapper[4698]: I0216 00:07:47.975816 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:47Z","lastTransitionTime":"2026-02-16T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.080289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.080353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.080367 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.080390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.080404 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:48Z","lastTransitionTime":"2026-02-16T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.183425 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.183476 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.183488 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.183509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.183524 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:48Z","lastTransitionTime":"2026-02-16T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.230905 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:48 crc kubenswrapper[4698]: E0216 00:07:48.231967 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.232106 4698 scope.go:117] "RemoveContainer" containerID="42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.278663 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:11:36.176222507 +0000 UTC Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.286729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.286781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.286804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.286829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.286849 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:48Z","lastTransitionTime":"2026-02-16T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.390686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.390734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.390744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.390761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.390775 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:48Z","lastTransitionTime":"2026-02-16T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.494351 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.494415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.494435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.494463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.494484 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:48Z","lastTransitionTime":"2026-02-16T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.597565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.597661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.597687 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.597712 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.597728 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:48Z","lastTransitionTime":"2026-02-16T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.701090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.701149 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.701169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.701198 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.701219 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:48Z","lastTransitionTime":"2026-02-16T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.733081 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/2.log" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.737102 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerStarted","Data":"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27"} Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.739169 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.759437 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3
afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.774322 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b
9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.796763 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\"
,\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.804221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.804262 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.804277 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.804297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.804314 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:48Z","lastTransitionTime":"2026-02-16T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.825177 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e5025
0fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.842734 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.855934 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.876403 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.887701 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.907006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.907062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.907078 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.907100 4698 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.907116 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:48Z","lastTransitionTime":"2026-02-16T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.908388 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:22Z\\\",\\\"message\\\":\\\"]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 00:07:22.376280 6347 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 
options:{GoMap:map[stateless:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run
/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.924133 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd5
59a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.939661 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e92aaa262728b5ab9af6556d29d5558d08822fe1b06333269be4b1ed3a7abc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:45Z\\\",\\\"message\\\":\\\"2026-02-16T00:06:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8\\\\n2026-02-16T00:07:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8 to /host/opt/cni/bin/\\\\n2026-02-16T00:07:00Z [verbose] multus-daemon started\\\\n2026-02-16T00:07:00Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T00:07:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.949855 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.963933 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a39b1-d242-40d4-8179-cf7b71aaeab8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60a206ea8b608682c6898dd9051903dcdf19e39c22d1adba760b43177b474a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2915589b18e0db91214ca20d06f488bbc04f6f8a83bd4ccaaf294f99fc4aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcedd751c83fbf60506332eb79ff3d8e7bfd67099c0bcf36b1acfff96b35bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e
48ca1b9eac438ec31b0520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.976907 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.987311 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:48 crc kubenswrapper[4698]: I0216 00:07:48.997769 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.007690 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.009608 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.009657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.009670 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc 
kubenswrapper[4698]: I0216 00:07:49.009689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.009702 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.020733 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.113360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.113429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.113440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.113457 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 
crc kubenswrapper[4698]: I0216 00:07:49.113471 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.131416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.131484 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.131500 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.131516 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.131528 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: E0216 00:07:49.150252 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.155055 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.155104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.155121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.155141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.155153 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: E0216 00:07:49.169787 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.174655 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.174728 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.174751 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.174801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.174828 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: E0216 00:07:49.196752 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.203754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.203832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.203854 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.203883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.203904 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: E0216 00:07:49.222509 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.231996 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.232111 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.232183 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:49 crc kubenswrapper[4698]: E0216 00:07:49.232212 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:49 crc kubenswrapper[4698]: E0216 00:07:49.232325 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.232343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.232455 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.232479 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.232546 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 crc kubenswrapper[4698]: E0216 00:07:49.232586 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.232567 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: E0216 00:07:49.248761 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: E0216 00:07:49.248940 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.250936 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.251002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.251018 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.251040 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.251083 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.279366 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:17:35.819875739 +0000 UTC Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.355009 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.355106 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.355128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.355158 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.355178 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.458776 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.458866 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.458892 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.458922 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.458942 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.561995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.562544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.562563 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.562586 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.562604 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.666162 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.666231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.666254 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.666284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.666306 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.744805 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/3.log" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.745955 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/2.log" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.750563 4698 generic.go:334] "Generic (PLEG): container finished" podID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerID="651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27" exitCode=1 Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.750653 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerDied","Data":"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27"} Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.750738 4698 scope.go:117] "RemoveContainer" containerID="42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.751937 4698 scope.go:117] "RemoveContainer" containerID="651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27" Feb 16 00:07:49 crc kubenswrapper[4698]: E0216 00:07:49.752234 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.769852 4698 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.769902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.769920 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.769939 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.769953 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.770860 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.791143 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.807651 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.831439 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.847086 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.870318 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157
bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.873335 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.873408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.873430 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.873461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.873481 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.886645 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.910960 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T
00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.931448 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.958898 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e73f3e42934ded5a33aecc9b660f5de787120dae1a31e582144170b7cd7a9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:22Z\\\",\\\"message\\\":\\\"]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 00:07:22.376280 6347 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0216 00:07:49.201007 6720 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 00:07:49.200759 6720 services_controller.go:360] Finished syncing 
service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 1.953741ms\\\\nI0216 00:07:49.201008 6720 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0216 00:07:49.201027 6720 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 00:07:49.201033 6720 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nF0216 00:07:49.201039 6720 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: 
Int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.977759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.977815 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.977831 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.977855 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.977872 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:49Z","lastTransitionTime":"2026-02-16T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:49 crc kubenswrapper[4698]: I0216 00:07:49.990861 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8
bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:49Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.011016 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.033854 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16
T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.056269 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.074897 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.080940 4698 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.080987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.080997 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.081013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.081031 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:50Z","lastTransitionTime":"2026-02-16T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.092816 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e92aaa262728b5ab9af6556d29d5558d08822fe1b06333269be4b1ed3a7abc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:45Z\\\",\\\"message\\\":\\\"2026-02-16T00:06:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8\\\\n2026-02-16T00:07:00+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8 to /host/opt/cni/bin/\\\\n2026-02-16T00:07:00Z [verbose] multus-daemon started\\\\n2026-02-16T00:07:00Z [verbose] Readiness Indicator file check\\\\n2026-02-16T00:07:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.112448 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.134460 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a39b1-d242-40d4-8179-cf7b71aaeab8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60a206ea8b608682c6898dd9051903dcdf19e39c22d1adba760b43177b474a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2915589b18e0db91214ca20d06f488bbc04f6f8a83bd4ccaaf294f99fc4aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcedd751c83fbf60506332eb79ff3d8e7bfd67099c0bcf36b1acfff96b35bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f847508044
67de502e48ca1b9eac438ec31b0520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.184145 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.184203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.184216 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.184233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:50 crc 
kubenswrapper[4698]: I0216 00:07:50.184247 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:50Z","lastTransitionTime":"2026-02-16T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.230942 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:50 crc kubenswrapper[4698]: E0216 00:07:50.231185 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.279959 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:03:10.802351384 +0000 UTC Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.286852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.286901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.286918 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.286943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.286963 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:50Z","lastTransitionTime":"2026-02-16T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.390734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.391153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.391312 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.391464 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.391609 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:50Z","lastTransitionTime":"2026-02-16T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.494963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.495018 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.495028 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.495044 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.495057 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:50Z","lastTransitionTime":"2026-02-16T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.598287 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.598784 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.598993 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.599166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.599310 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:50Z","lastTransitionTime":"2026-02-16T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.703420 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.703501 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.703519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.703546 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.703567 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:50Z","lastTransitionTime":"2026-02-16T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.757583 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/3.log" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.763964 4698 scope.go:117] "RemoveContainer" containerID="651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27" Feb 16 00:07:50 crc kubenswrapper[4698]: E0216 00:07:50.764289 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.785388 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.807015 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.807098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.807124 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.807156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.807180 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:50Z","lastTransitionTime":"2026-02-16T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.812059 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb
6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.835773 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b
9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.860063 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\"
,\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.881252 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.906245 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.911442 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.911780 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.911964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.912161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.912381 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:50Z","lastTransitionTime":"2026-02-16T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.935366 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.950498 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.975647 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0216 00:07:49.201007 6720 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 00:07:49.200759 6720 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace 
openshift-marketplace for network=default : 1.953741ms\\\\nI0216 00:07:49.201008 6720 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0216 00:07:49.201027 6720 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 00:07:49.201033 6720 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nF0216 00:07:49.201039 6720 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2946920287543065
67425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:50 crc kubenswrapper[4698]: I0216 00:07:50.989546 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:50Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.003122 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.015899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 
00:07:51.015960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.015977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.016003 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.016019 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:51Z","lastTransitionTime":"2026-02-16T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.017205 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e92aaa262728b5ab9af6556d29d5558d08822fe1b06333269be4b1ed3a7abc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:45Z\\\",\\\"message\\\":\\\"2026-02-16T00:06:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8\\\\n2026-02-16T00:07:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8 to /host/opt/cni/bin/\\\\n2026-02-16T00:07:00Z [verbose] multus-daemon started\\\\n2026-02-16T00:07:00Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T00:07:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.028761 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.039669 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a39b1-d242-40d4-8179-cf7b71aaeab8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60a206ea8b608682c6898dd9051903dcdf19e39c22d1adba760b43177b474a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2915589b18e0db91214ca20d06f488bbc04f6f8a83bd4ccaaf294f99fc4aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcedd751c83fbf60506332eb79ff3d8e7bfd67099c0bcf36b1acfff96b35bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e
48ca1b9eac438ec31b0520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.054045 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.076503 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.091015 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.104089 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.119314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.119374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.119390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.119416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.119437 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:51Z","lastTransitionTime":"2026-02-16T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.222960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.223018 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.223032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.223053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.223072 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:51Z","lastTransitionTime":"2026-02-16T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.231355 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.231372 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:51 crc kubenswrapper[4698]: E0216 00:07:51.231486 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.231631 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:51 crc kubenswrapper[4698]: E0216 00:07:51.231810 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:51 crc kubenswrapper[4698]: E0216 00:07:51.231930 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.256239 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036
cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.272006 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.281020 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 23:22:42.132215201 +0000 UTC Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.295552 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0216 00:07:49.201007 
6720 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 00:07:49.200759 6720 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 1.953741ms\\\\nI0216 00:07:49.201008 6720 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0216 00:07:49.201027 6720 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 00:07:49.201033 6720 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nF0216 00:07:49.201039 6720 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2946920287543065
67425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.314577 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.325849 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.325890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.325901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.325919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.325929 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:51Z","lastTransitionTime":"2026-02-16T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.331875 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.351134 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e92aaa262728b5ab9af6556d29d5558d08822fe1b06333269be4b1ed3a7abc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:45Z\\\",\\\"message\\\":\\\"2026-02-16T00:06:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8\\\\n2026-02-16T00:07:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8 to /host/opt/cni/bin/\\\\n2026-02-16T00:07:00Z [verbose] multus-daemon started\\\\n2026-02-16T00:07:00Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T00:07:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.366045 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.380328 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a39b1-d242-40d4-8179-cf7b71aaeab8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60a206ea8b608682c6898dd9051903dcdf19e39c22d1adba760b43177b474a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2915589b18e0db91214ca20d06f488bbc04f6f8a83bd4ccaaf294f99fc4aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcedd751c83fbf60506332eb79ff3d8e7bfd67099c0bcf36b1acfff96b35bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e
48ca1b9eac438ec31b0520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.405008 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.421075 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.428246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.428293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:51 crc 
kubenswrapper[4698]: I0216 00:07:51.428305 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.428324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.428483 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:51Z","lastTransitionTime":"2026-02-16T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.435513 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.448557 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.464718 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.484407 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157
bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.499168 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.521670 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.532641 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.532677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.532687 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.532701 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.532711 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:51Z","lastTransitionTime":"2026-02-16T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.538405 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e5025
0fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.560268 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.635795 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.635868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.635886 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.635915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.635933 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:51Z","lastTransitionTime":"2026-02-16T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.739417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.739474 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.739486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.739506 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.739519 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:51Z","lastTransitionTime":"2026-02-16T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.843134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.843195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.843214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.843241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.843265 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:51Z","lastTransitionTime":"2026-02-16T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.945825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.945877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.945894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.945917 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:51 crc kubenswrapper[4698]: I0216 00:07:51.945936 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:51Z","lastTransitionTime":"2026-02-16T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.048323 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.048396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.048423 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.048450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.048474 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:52Z","lastTransitionTime":"2026-02-16T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.151874 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.151937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.151955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.151977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.151995 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:52Z","lastTransitionTime":"2026-02-16T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.231732 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:52 crc kubenswrapper[4698]: E0216 00:07:52.231956 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.255431 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.255519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.255540 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.255563 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.255582 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:52Z","lastTransitionTime":"2026-02-16T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.281939 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:39:42.081478213 +0000 UTC Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.358255 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.358406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.358437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.358470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.358494 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:52Z","lastTransitionTime":"2026-02-16T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.462104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.462161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.462179 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.462206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.462224 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:52Z","lastTransitionTime":"2026-02-16T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.565408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.565491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.565514 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.565540 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.565557 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:52Z","lastTransitionTime":"2026-02-16T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.668943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.669022 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.669039 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.669066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.669084 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:52Z","lastTransitionTime":"2026-02-16T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.771064 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.771111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.771130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.771145 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.771157 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:52Z","lastTransitionTime":"2026-02-16T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.874470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.874533 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.874551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.874577 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.874596 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:52Z","lastTransitionTime":"2026-02-16T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.977908 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.978006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.978028 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.978136 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:52 crc kubenswrapper[4698]: I0216 00:07:52.978158 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:52Z","lastTransitionTime":"2026-02-16T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.080965 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.081345 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.081428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.081504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.081563 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:53Z","lastTransitionTime":"2026-02-16T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.185824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.185889 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.185908 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.185932 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.185950 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:53Z","lastTransitionTime":"2026-02-16T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.231919 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.231955 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.231919 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:53 crc kubenswrapper[4698]: E0216 00:07:53.232101 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:53 crc kubenswrapper[4698]: E0216 00:07:53.232201 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:53 crc kubenswrapper[4698]: E0216 00:07:53.232289 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.282434 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:26:01.252123654 +0000 UTC Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.288987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.289047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.289071 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.289114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.289136 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:53Z","lastTransitionTime":"2026-02-16T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.392224 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.392279 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.392297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.392325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.392343 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:53Z","lastTransitionTime":"2026-02-16T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.496590 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.496706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.496736 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.496765 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.496785 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:53Z","lastTransitionTime":"2026-02-16T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.600277 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.600370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.600393 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.600427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.600452 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:53Z","lastTransitionTime":"2026-02-16T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.703907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.703994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.704033 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.704075 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.704100 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:53Z","lastTransitionTime":"2026-02-16T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.810685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.810798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.810833 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.810868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.810891 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:53Z","lastTransitionTime":"2026-02-16T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.913945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.914010 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.914029 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.914053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:53 crc kubenswrapper[4698]: I0216 00:07:53.914072 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:53Z","lastTransitionTime":"2026-02-16T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.017163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.017234 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.017252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.017278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.017297 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:54Z","lastTransitionTime":"2026-02-16T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.120407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.120460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.120474 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.120495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.120511 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:54Z","lastTransitionTime":"2026-02-16T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.223820 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.223891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.223909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.223933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.223951 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:54Z","lastTransitionTime":"2026-02-16T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.231301 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:54 crc kubenswrapper[4698]: E0216 00:07:54.231547 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.283031 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:17:52.850014566 +0000 UTC Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.327074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.327146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.327199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.327226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.327245 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:54Z","lastTransitionTime":"2026-02-16T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.430465 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.430539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.430561 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.430585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.430602 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:54Z","lastTransitionTime":"2026-02-16T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.533475 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.533560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.533583 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.533609 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.533679 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:54Z","lastTransitionTime":"2026-02-16T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.637439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.637536 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.637551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.637567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.637579 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:54Z","lastTransitionTime":"2026-02-16T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.740890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.740956 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.740974 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.741000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.741020 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:54Z","lastTransitionTime":"2026-02-16T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.844184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.844225 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.844235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.844253 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.844264 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:54Z","lastTransitionTime":"2026-02-16T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.947387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.947449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.947469 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.947496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:54 crc kubenswrapper[4698]: I0216 00:07:54.947518 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:54Z","lastTransitionTime":"2026-02-16T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.050382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.050440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.050461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.050481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.050493 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:55Z","lastTransitionTime":"2026-02-16T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.153721 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.153803 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.153820 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.153848 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.153866 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:55Z","lastTransitionTime":"2026-02-16T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.198516 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.198745 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.198766 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.198733767 +0000 UTC m=+148.856632529 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.198811 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.198844 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.198870 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.198884 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.198936 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.198917443 +0000 UTC m=+148.856816245 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.199041 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.199060 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.199076 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.199234 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.199098798 +0000 UTC m=+148.856997560 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.199297 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.199448 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.199416248 +0000 UTC m=+148.857315040 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.199999 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.200066 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.200089 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.200253 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.200211484 +0000 UTC m=+148.858110426 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.231554 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.231566 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.231692 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.231736 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.231898 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 00:07:55 crc kubenswrapper[4698]: E0216 00:07:55.232055 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.256570 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.256636 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.256646 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.256666 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.256681 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:55Z","lastTransitionTime":"2026-02-16T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.283829 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:28:44.080811171 +0000 UTC
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.359099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.359167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.359187 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.359213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.359232 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:55Z","lastTransitionTime":"2026-02-16T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.462576 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.462669 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.462693 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.462721 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.462738 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:55Z","lastTransitionTime":"2026-02-16T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.565943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.566021 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.566045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.566074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.566096 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:55Z","lastTransitionTime":"2026-02-16T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.673346 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.673428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.673454 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.673495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.673533 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:55Z","lastTransitionTime":"2026-02-16T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.777495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.777558 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.777576 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.777600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.777656 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:55Z","lastTransitionTime":"2026-02-16T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.881332 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.881425 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.881444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.881470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.881489 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:55Z","lastTransitionTime":"2026-02-16T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.984238 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.984299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.984316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.984341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:55 crc kubenswrapper[4698]: I0216 00:07:55.984360 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:55Z","lastTransitionTime":"2026-02-16T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.087833 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.087909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.087933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.087966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.087983 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:56Z","lastTransitionTime":"2026-02-16T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.192360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.192424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.192436 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.192457 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.192472 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:56Z","lastTransitionTime":"2026-02-16T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.231202 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f"
Feb 16 00:07:56 crc kubenswrapper[4698]: E0216 00:07:56.231404 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.284247 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:42:29.581776371 +0000 UTC
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.296166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.296230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.296247 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.296271 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.296291 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:56Z","lastTransitionTime":"2026-02-16T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.399802 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.399883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.399907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.399938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.399966 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:56Z","lastTransitionTime":"2026-02-16T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.503653 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.503718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.503738 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.503762 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.503781 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:56Z","lastTransitionTime":"2026-02-16T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.607156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.607200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.607214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.607233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.607245 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:56Z","lastTransitionTime":"2026-02-16T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.710719 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.710793 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.710819 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.710851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.710875 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:56Z","lastTransitionTime":"2026-02-16T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.813644 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.813700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.813717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.813740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.813757 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:56Z","lastTransitionTime":"2026-02-16T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.916678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.916742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.916758 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.916781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:56 crc kubenswrapper[4698]: I0216 00:07:56.916801 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:56Z","lastTransitionTime":"2026-02-16T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.020297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.020354 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.020370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.020395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.020412 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:57Z","lastTransitionTime":"2026-02-16T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.123596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.123661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.123671 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.123685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.123697 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:57Z","lastTransitionTime":"2026-02-16T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.226684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.226759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.226781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.226811 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.226836 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:57Z","lastTransitionTime":"2026-02-16T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.230931 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.231005 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:07:57 crc kubenswrapper[4698]: E0216 00:07:57.231083 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.231007 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:07:57 crc kubenswrapper[4698]: E0216 00:07:57.231189 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 00:07:57 crc kubenswrapper[4698]: E0216 00:07:57.231429 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.285004 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:18:42.417187506 +0000 UTC
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.330698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.330774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.330793 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.331212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.331238 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:57Z","lastTransitionTime":"2026-02-16T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.434921 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.435159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.435192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.435227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.435254 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:57Z","lastTransitionTime":"2026-02-16T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.538567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.538676 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.538709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.538740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.538767 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:57Z","lastTransitionTime":"2026-02-16T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.641503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.641590 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.641658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.641696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.641720 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:57Z","lastTransitionTime":"2026-02-16T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.745882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.745970 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.745988 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.746041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.746059 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:57Z","lastTransitionTime":"2026-02-16T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.849015 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.849071 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.849086 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.849106 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.849122 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:57Z","lastTransitionTime":"2026-02-16T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.952061 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.952112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.952122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.952155 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:57 crc kubenswrapper[4698]: I0216 00:07:57.952166 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:57Z","lastTransitionTime":"2026-02-16T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.055384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.055454 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.055468 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.055498 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.055513 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:58Z","lastTransitionTime":"2026-02-16T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.159099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.159202 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.159228 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.159264 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.159288 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:58Z","lastTransitionTime":"2026-02-16T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.231317 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:07:58 crc kubenswrapper[4698]: E0216 00:07:58.231599 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.263069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.263130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.263150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.263178 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.263199 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:58Z","lastTransitionTime":"2026-02-16T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.286002 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:22:06.447060218 +0000 UTC Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.366695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.366761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.366784 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.366813 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.366831 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:58Z","lastTransitionTime":"2026-02-16T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.473299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.473459 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.473481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.473507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.473556 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:58Z","lastTransitionTime":"2026-02-16T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.577442 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.577520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.577539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.577568 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.577589 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:58Z","lastTransitionTime":"2026-02-16T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.681152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.681324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.681353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.681434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.681460 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:58Z","lastTransitionTime":"2026-02-16T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.785642 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.785715 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.785728 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.785750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.785766 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:58Z","lastTransitionTime":"2026-02-16T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.889778 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.889844 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.889859 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.889882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.889901 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:58Z","lastTransitionTime":"2026-02-16T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.994269 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.994363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.994390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.994424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:58 crc kubenswrapper[4698]: I0216 00:07:58.994451 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:58Z","lastTransitionTime":"2026-02-16T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.097331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.097383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.097398 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.097415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.097429 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:59Z","lastTransitionTime":"2026-02-16T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.201204 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.201310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.201332 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.201356 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.201374 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:59Z","lastTransitionTime":"2026-02-16T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.231605 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.231787 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.231835 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:07:59 crc kubenswrapper[4698]: E0216 00:07:59.231999 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:07:59 crc kubenswrapper[4698]: E0216 00:07:59.232233 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:07:59 crc kubenswrapper[4698]: E0216 00:07:59.232365 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.247264 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.286465 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:51:02.247514245 +0000 UTC Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.304528 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.304562 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.304572 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.304591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.304603 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:59Z","lastTransitionTime":"2026-02-16T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.407382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.407424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.407433 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.407450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.407461 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:59Z","lastTransitionTime":"2026-02-16T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.509270 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.509317 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.509329 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.509344 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.509355 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:59Z","lastTransitionTime":"2026-02-16T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:59 crc kubenswrapper[4698]: E0216 00:07:59.524105 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.529739 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.529780 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.529792 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.529808 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.529823 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:59Z","lastTransitionTime":"2026-02-16T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:59 crc kubenswrapper[4698]: E0216 00:07:59.551386 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.557288 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.557357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.557394 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.557424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.557449 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:59Z","lastTransitionTime":"2026-02-16T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:59 crc kubenswrapper[4698]: E0216 00:07:59.580173 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.586284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.586350 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.586363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.586381 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.586410 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:59Z","lastTransitionTime":"2026-02-16T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:59 crc kubenswrapper[4698]: E0216 00:07:59.606536 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.611530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.611596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.611641 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.611672 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.611694 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:59Z","lastTransitionTime":"2026-02-16T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:59 crc kubenswrapper[4698]: E0216 00:07:59.629374 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 00:07:59 crc kubenswrapper[4698]: E0216 00:07:59.629532 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.632292 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.632334 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.632347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.632367 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.632380 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:59Z","lastTransitionTime":"2026-02-16T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.735701 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.735755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.735767 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.735790 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.735803 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:59Z","lastTransitionTime":"2026-02-16T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.839564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.839666 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.839684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.839713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.839734 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:59Z","lastTransitionTime":"2026-02-16T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.943328 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.943386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.943399 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.943416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:07:59 crc kubenswrapper[4698]: I0216 00:07:59.943434 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:07:59Z","lastTransitionTime":"2026-02-16T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.046975 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.047047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.047066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.047093 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.047113 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:00Z","lastTransitionTime":"2026-02-16T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.150800 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.150881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.150902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.150933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.150955 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:00Z","lastTransitionTime":"2026-02-16T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.230985 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:00 crc kubenswrapper[4698]: E0216 00:08:00.231212 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.258750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.258845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.258865 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.258890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.258912 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:00Z","lastTransitionTime":"2026-02-16T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.287170 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:36:17.925925899 +0000 UTC Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.362369 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.362410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.362421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.362440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.362453 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:00Z","lastTransitionTime":"2026-02-16T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.465685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.465739 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.465754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.465771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.465787 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:00Z","lastTransitionTime":"2026-02-16T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.570273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.570347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.570366 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.570395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.570416 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:00Z","lastTransitionTime":"2026-02-16T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.674538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.674653 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.674674 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.674701 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.674721 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:00Z","lastTransitionTime":"2026-02-16T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.778014 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.778069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.778084 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.778104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.778116 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:00Z","lastTransitionTime":"2026-02-16T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.880701 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.880779 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.880801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.880824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.880843 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:00Z","lastTransitionTime":"2026-02-16T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.983268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.983313 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.983325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.983345 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:00 crc kubenswrapper[4698]: I0216 00:08:00.983357 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:00Z","lastTransitionTime":"2026-02-16T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.086780 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.087258 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.087292 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.087310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.087322 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:01Z","lastTransitionTime":"2026-02-16T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.190690 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.190768 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.190818 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.190837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.190848 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:01Z","lastTransitionTime":"2026-02-16T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.231698 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.231812 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:01 crc kubenswrapper[4698]: E0216 00:08:01.231936 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.231994 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:01 crc kubenswrapper[4698]: E0216 00:08:01.232144 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:01 crc kubenswrapper[4698]: E0216 00:08:01.232212 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.255275 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.272734 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.288010 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:31:11.099109441 +0000 UTC Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.288914 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.294269 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.294336 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.294366 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.294399 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.294428 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:01Z","lastTransitionTime":"2026-02-16T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.316237 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.335093 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.358422 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.372635 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.398338 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.398398 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.398417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.398446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.398465 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:01Z","lastTransitionTime":"2026-02-16T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.400054 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.417731 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b
9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.440423 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.458517 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.482585 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0216 00:07:49.201007 6720 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 00:07:49.200759 6720 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace 
openshift-marketplace for network=default : 1.953741ms\\\\nI0216 00:07:49.201008 6720 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0216 00:07:49.201027 6720 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 00:07:49.201033 6720 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nF0216 00:07:49.201039 6720 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2946920287543065
67425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.496938 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.502056 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.502105 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.502117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.502138 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.502152 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:01Z","lastTransitionTime":"2026-02-16T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.511975 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"542df46c-6d93-45b6-bf8d-d54d4e9febfe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d514d688c0cca450cc92f9bc0c0c996bee10cf7d03b1c4b2e30d5afd36db85df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b755a5466b8fefa2667afc5e8cc0a6f22f583ce8800b10f8836d9957364d5315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b755a5466b8fefa2667afc5e8cc0a6f22f583ce8800b10f8836d9957364d5315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.526908 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a39b1-d242-40d4-8179-cf7b71aaeab8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60a206ea8b608682c6898dd9051903dcdf19e39c22d1adba760b43177b474a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2915589b18e0db91214ca20d06f488bbc04f6f8a83bd4ccaaf294f99fc4aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcedd751c83fbf60506332eb79ff3d8e7bfd67099c0bcf36b1acfff96b35bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.542399 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.555867 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.569238 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.583931 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e92aaa262728b5ab9af6556d29d5558d08822fe1b06333269be4b1ed3a7abc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:45Z\\\",\\\"message\\\":\\\"2026-02-16T00:06:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8\\\\n2026-02-16T00:07:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8 to /host/opt/cni/bin/\\\\n2026-02-16T00:07:00Z [verbose] multus-daemon started\\\\n2026-02-16T00:07:00Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T00:07:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.605097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.605153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.605172 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.605199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.605217 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:01Z","lastTransitionTime":"2026-02-16T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.708500 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.708548 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.708576 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.708593 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.708603 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:01Z","lastTransitionTime":"2026-02-16T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.811595 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.812080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.812170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.812300 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.812406 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:01Z","lastTransitionTime":"2026-02-16T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.915339 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.915401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.915423 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.915480 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:01 crc kubenswrapper[4698]: I0216 00:08:01.915501 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:01Z","lastTransitionTime":"2026-02-16T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.018729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.018774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.018788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.018808 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.018819 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:02Z","lastTransitionTime":"2026-02-16T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.121902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.121982 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.121996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.122019 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.122034 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:02Z","lastTransitionTime":"2026-02-16T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.224491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.224535 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.224547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.224565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.224578 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:02Z","lastTransitionTime":"2026-02-16T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.230663 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:02 crc kubenswrapper[4698]: E0216 00:08:02.230868 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.288890 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 03:34:02.54607327 +0000 UTC Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.326960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.327000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.327014 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.327033 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.327045 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:02Z","lastTransitionTime":"2026-02-16T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.430239 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.430577 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.430687 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.430775 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.430840 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:02Z","lastTransitionTime":"2026-02-16T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.533711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.534248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.534449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.534684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.534864 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:02Z","lastTransitionTime":"2026-02-16T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.637791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.638167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.638371 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.638540 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.638660 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:02Z","lastTransitionTime":"2026-02-16T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.741675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.741725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.741740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.741760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.741773 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:02Z","lastTransitionTime":"2026-02-16T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.845906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.845958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.845971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.845992 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.846011 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:02Z","lastTransitionTime":"2026-02-16T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.948350 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.948406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.948422 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.948445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:02 crc kubenswrapper[4698]: I0216 00:08:02.948462 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:02Z","lastTransitionTime":"2026-02-16T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.052020 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.052089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.052114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.052176 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.052201 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:03Z","lastTransitionTime":"2026-02-16T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.155749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.155831 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.155856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.155895 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.155915 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:03Z","lastTransitionTime":"2026-02-16T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.230986 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.231120 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:03 crc kubenswrapper[4698]: E0216 00:08:03.231180 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.231215 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:03 crc kubenswrapper[4698]: E0216 00:08:03.231371 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:03 crc kubenswrapper[4698]: E0216 00:08:03.231679 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.259282 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.259333 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.259347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.259366 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.259378 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:03Z","lastTransitionTime":"2026-02-16T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.289800 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:25:14.003036179 +0000 UTC Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.363183 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.363229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.363243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.363265 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.363281 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:03Z","lastTransitionTime":"2026-02-16T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.469153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.469210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.469230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.469255 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.469272 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:03Z","lastTransitionTime":"2026-02-16T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.573096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.573168 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.573190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.573221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.573241 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:03Z","lastTransitionTime":"2026-02-16T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.676942 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.677010 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.677040 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.677074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.677095 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:03Z","lastTransitionTime":"2026-02-16T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.779837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.779911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.779938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.779967 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.779986 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:03Z","lastTransitionTime":"2026-02-16T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.884856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.884934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.884956 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.884985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.885003 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:03Z","lastTransitionTime":"2026-02-16T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.989770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.989845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.989863 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.989894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:03 crc kubenswrapper[4698]: I0216 00:08:03.989912 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:03Z","lastTransitionTime":"2026-02-16T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.094183 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.094235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.094246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.094267 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.094282 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:04Z","lastTransitionTime":"2026-02-16T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.197789 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.197858 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.197876 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.197902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.197921 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:04Z","lastTransitionTime":"2026-02-16T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.231409 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:04 crc kubenswrapper[4698]: E0216 00:08:04.231673 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.233394 4698 scope.go:117] "RemoveContainer" containerID="651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27" Feb 16 00:08:04 crc kubenswrapper[4698]: E0216 00:08:04.234287 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.290171 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:47:54.715512942 +0000 UTC Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.301360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.301406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.301419 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.301438 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.301450 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:04Z","lastTransitionTime":"2026-02-16T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.404384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.404458 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.404478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.404506 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.404529 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:04Z","lastTransitionTime":"2026-02-16T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.508061 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.508135 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.508160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.508192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.508215 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:04Z","lastTransitionTime":"2026-02-16T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.611577 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.611682 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.611702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.611733 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.611750 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:04Z","lastTransitionTime":"2026-02-16T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.714845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.714922 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.714941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.714968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.714987 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:04Z","lastTransitionTime":"2026-02-16T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.818030 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.818104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.818127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.818162 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.818185 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:04Z","lastTransitionTime":"2026-02-16T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.921574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.921643 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.921662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.921685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:04 crc kubenswrapper[4698]: I0216 00:08:04.921724 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:04Z","lastTransitionTime":"2026-02-16T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.025105 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.025299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.025331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.025363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.025382 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:05Z","lastTransitionTime":"2026-02-16T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.128658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.128761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.128780 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.128807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.128827 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:05Z","lastTransitionTime":"2026-02-16T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.231077 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.231166 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.231259 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:05 crc kubenswrapper[4698]: E0216 00:08:05.231499 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:05 crc kubenswrapper[4698]: E0216 00:08:05.231764 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:05 crc kubenswrapper[4698]: E0216 00:08:05.231976 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.232124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.232181 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.232237 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.232261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.232282 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:05Z","lastTransitionTime":"2026-02-16T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.290954 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:55:51.856984623 +0000 UTC Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.336130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.336224 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.336250 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.336285 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.336308 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:05Z","lastTransitionTime":"2026-02-16T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.439673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.439736 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.439755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.439783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.439801 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:05Z","lastTransitionTime":"2026-02-16T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.542997 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.543051 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.543073 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.543099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.543116 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:05Z","lastTransitionTime":"2026-02-16T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.646708 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.646776 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.646797 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.646829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.646848 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:05Z","lastTransitionTime":"2026-02-16T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.750166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.750241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.750265 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.750299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.750328 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:05Z","lastTransitionTime":"2026-02-16T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.853354 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.853474 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.853513 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.853547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.853568 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:05Z","lastTransitionTime":"2026-02-16T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.956918 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.957440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.957605 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.957855 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:05 crc kubenswrapper[4698]: I0216 00:08:05.958002 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:05Z","lastTransitionTime":"2026-02-16T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.061982 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.062193 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.062226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.062261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.062287 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:06Z","lastTransitionTime":"2026-02-16T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.166536 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.166649 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.166677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.166711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.166737 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:06Z","lastTransitionTime":"2026-02-16T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.231244 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:06 crc kubenswrapper[4698]: E0216 00:08:06.231464 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.270565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.270699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.270730 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.270762 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.270781 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:06Z","lastTransitionTime":"2026-02-16T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.291841 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:44:20.576012642 +0000 UTC Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.374261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.374320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.374338 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.374364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.374384 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:06Z","lastTransitionTime":"2026-02-16T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.477130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.477199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.477222 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.477248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.477265 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:06Z","lastTransitionTime":"2026-02-16T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.580809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.580901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.580925 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.580960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.580983 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:06Z","lastTransitionTime":"2026-02-16T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.684219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.684269 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.684283 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.684301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.684313 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:06Z","lastTransitionTime":"2026-02-16T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.788436 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.788497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.788517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.788544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.788564 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:06Z","lastTransitionTime":"2026-02-16T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.891769 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.891861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.891882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.891907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.891923 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:06Z","lastTransitionTime":"2026-02-16T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.996249 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.996316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.996340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.996373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:06 crc kubenswrapper[4698]: I0216 00:08:06.996392 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:06Z","lastTransitionTime":"2026-02-16T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.099803 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.099866 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.099904 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.099945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.099968 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:07Z","lastTransitionTime":"2026-02-16T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.203703 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.203778 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.203795 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.203817 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.203833 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:07Z","lastTransitionTime":"2026-02-16T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.231421 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.231523 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.231665 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:07 crc kubenswrapper[4698]: E0216 00:08:07.231724 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:07 crc kubenswrapper[4698]: E0216 00:08:07.231906 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:07 crc kubenswrapper[4698]: E0216 00:08:07.232089 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.292025 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:03:10.168502869 +0000 UTC Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.306672 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.306760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.306783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.306811 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.306831 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:07Z","lastTransitionTime":"2026-02-16T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.409798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.409892 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.409906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.409929 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.409945 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:07Z","lastTransitionTime":"2026-02-16T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.513604 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.513725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.513749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.513781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.513800 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:07Z","lastTransitionTime":"2026-02-16T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.618238 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.618313 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.618335 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.618363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.618383 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:07Z","lastTransitionTime":"2026-02-16T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.722589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.722694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.722718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.722749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.722770 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:07Z","lastTransitionTime":"2026-02-16T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.826980 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.827056 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.827081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.827119 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.827145 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:07Z","lastTransitionTime":"2026-02-16T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.931385 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.931467 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.931487 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.931517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:07 crc kubenswrapper[4698]: I0216 00:08:07.931537 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:07Z","lastTransitionTime":"2026-02-16T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.035779 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.035843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.035863 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.035895 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.035917 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:08Z","lastTransitionTime":"2026-02-16T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.139099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.139176 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.139196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.139231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.139250 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:08Z","lastTransitionTime":"2026-02-16T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.231572 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:08 crc kubenswrapper[4698]: E0216 00:08:08.232241 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.242085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.242139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.242158 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.242186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.242205 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:08Z","lastTransitionTime":"2026-02-16T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.292802 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:21:53.233331418 +0000 UTC Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.345267 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.345308 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.345326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.345362 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.345381 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:08Z","lastTransitionTime":"2026-02-16T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.448605 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.448691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.448704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.448726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.448741 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:08Z","lastTransitionTime":"2026-02-16T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.553229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.553308 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.553329 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.553361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.553380 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:08Z","lastTransitionTime":"2026-02-16T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.657828 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.657902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.657925 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.657954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.657975 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:08Z","lastTransitionTime":"2026-02-16T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.766196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.766823 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.766886 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.766928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.766953 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:08Z","lastTransitionTime":"2026-02-16T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.870850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.870905 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.870924 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.870948 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.870965 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:08Z","lastTransitionTime":"2026-02-16T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.974563 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.974647 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.974661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.974680 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:08 crc kubenswrapper[4698]: I0216 00:08:08.974693 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:08Z","lastTransitionTime":"2026-02-16T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.078606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.078719 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.078742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.078771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.078793 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:09Z","lastTransitionTime":"2026-02-16T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.183069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.183135 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.183171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.183204 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.183223 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:09Z","lastTransitionTime":"2026-02-16T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.230784 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.230910 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:09 crc kubenswrapper[4698]: E0216 00:08:09.230980 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.230799 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:09 crc kubenswrapper[4698]: E0216 00:08:09.231295 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:09 crc kubenswrapper[4698]: E0216 00:08:09.231328 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.287462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.287531 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.287551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.287579 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.287598 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:09Z","lastTransitionTime":"2026-02-16T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.293674 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:40:42.099673876 +0000 UTC Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.390761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.390807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.390822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.390842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.390855 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:09Z","lastTransitionTime":"2026-02-16T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.493805 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.493878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.493906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.493939 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.493963 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:09Z","lastTransitionTime":"2026-02-16T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.597514 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.598016 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.598244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.598439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.598596 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:09Z","lastTransitionTime":"2026-02-16T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.703377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.703447 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.703468 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.703497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.703516 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:09Z","lastTransitionTime":"2026-02-16T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.807099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.807167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.807186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.807212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.807229 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:09Z","lastTransitionTime":"2026-02-16T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.815293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.815361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.815381 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.815407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.815426 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:09Z","lastTransitionTime":"2026-02-16T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:09 crc kubenswrapper[4698]: E0216 00:08:09.843266 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.850275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.850329 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.850349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.850382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.850402 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:09Z","lastTransitionTime":"2026-02-16T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:09 crc kubenswrapper[4698]: E0216 00:08:09.874006 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.880544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.880598 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.880652 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.880692 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.880711 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:09Z","lastTransitionTime":"2026-02-16T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:09 crc kubenswrapper[4698]: E0216 00:08:09.902131 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.908785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.909074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.909278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.909466 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.909702 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:09Z","lastTransitionTime":"2026-02-16T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:09 crc kubenswrapper[4698]: E0216 00:08:09.932073 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.938345 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.938391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.938405 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.938428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.938442 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:09Z","lastTransitionTime":"2026-02-16T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:09 crc kubenswrapper[4698]: E0216 00:08:09.955849 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5fc85dad-076c-40e5-8031-b86a3144865b\\\",\\\"systemUUID\\\":\\\"77fc4cfc-d7c7-4ed3-bf37-2fa790a9cc57\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:09 crc kubenswrapper[4698]: E0216 00:08:09.956078 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.958748 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.958801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.958816 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.958836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:09 crc kubenswrapper[4698]: I0216 00:08:09.958853 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:09Z","lastTransitionTime":"2026-02-16T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.062473 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.063271 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.063351 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.063393 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.063421 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:10Z","lastTransitionTime":"2026-02-16T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.167434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.167543 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.167569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.167604 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.167662 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:10Z","lastTransitionTime":"2026-02-16T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.231226 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:10 crc kubenswrapper[4698]: E0216 00:08:10.231407 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.271098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.271170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.271189 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.271216 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.271236 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:10Z","lastTransitionTime":"2026-02-16T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.294055 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:31:25.260791224 +0000 UTC Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.374382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.374461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.374480 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.374510 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.374532 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:10Z","lastTransitionTime":"2026-02-16T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.478700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.478761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.478781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.478807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.478827 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:10Z","lastTransitionTime":"2026-02-16T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.583950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.584012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.584031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.584059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.584080 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:10Z","lastTransitionTime":"2026-02-16T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.689281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.689395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.689416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.689445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.689463 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:10Z","lastTransitionTime":"2026-02-16T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.793163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.793235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.793249 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.793274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.793291 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:10Z","lastTransitionTime":"2026-02-16T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.896003 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.896050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.896063 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.896082 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.896095 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:10Z","lastTransitionTime":"2026-02-16T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.998684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.998737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.998752 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.998773 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:10 crc kubenswrapper[4698]: I0216 00:08:10.998786 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:10Z","lastTransitionTime":"2026-02-16T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.101762 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.101823 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.101835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.101855 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.101867 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:11Z","lastTransitionTime":"2026-02-16T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.205881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.205941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.205960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.205984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.206001 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:11Z","lastTransitionTime":"2026-02-16T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.231247 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:11 crc kubenswrapper[4698]: E0216 00:08:11.231455 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.231535 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:11 crc kubenswrapper[4698]: E0216 00:08:11.231742 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.231932 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:11 crc kubenswrapper[4698]: E0216 00:08:11.232074 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.263127 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bcf0de-de41-412b-9346-db3871696ed0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8df4138138dda7ac442096518ff19f53f11fc768008e2add3d68c19503a9cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e589739501cc00a7cdaec5ed578a1335ee95694e83f249b11cf6b3cffd9035bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b7a511e16f05c903dd1bf3ef5a17b7aa57145f11e11af4f410ea4e0e8c7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e71ddfefd00d1bc6035005cc26fea264225ccc361b78343c44aa9f5ad02a77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d8e70ae6fa368172b820fd9b1a7a9d694f396234ff5ee70490334c85c846f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61e953f6b1fbc9cb0c0665dfd373873f1b0a727a7f45f5044d0cac3de3068489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63549a8ab7ed6624291eab44bcee0acba016e8846a0ef5e64a528d317e0a021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d2f73d3a8997bbc0074e6a5a0db61d52e58bb95ad1f58ed8bf020a77066b7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-02-16T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.285840 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9h8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e45b5f-053d-4598-bdfc-cdf6903bf4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fec06440636536808a1e891a8b2a3a46ef72ebc02c2a50537f278548df689353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r77mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9h8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.295178 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:22:05.436248817 +0000 UTC Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.310011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.310347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.310441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.310533 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.310870 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:11Z","lastTransitionTime":"2026-02-16T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.320982 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:49Z\\\",\\\"message\\\":\\\"v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0216 00:07:49.201007 6720 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 00:07:49.200759 6720 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace 
openshift-marketplace for network=default : 1.953741ms\\\\nI0216 00:07:49.201008 6720 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0216 00:07:49.201027 6720 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 00:07:49.201033 6720 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nF0216 00:07:49.201039 6720 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2946920287543065
67425e82a46dcc7939283da870918c0eccd53d35c1630c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbd26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmrt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.342850 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dv2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69838a3a-c20d-4770-b95f-ab85a265d53c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e92aaa262728b5ab9af6556d29d5558d08822fe1b06333269be4b1ed3a7abc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T00:07:45Z\\\",\\\"message\\\":\\\"2026-02-16T00:06:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8\\\\n2026-02-16T00:07:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_42ff4bc6-21b5-485c-bc8f-ad09ba5a20f8 to /host/opt/cni/bin/\\\\n2026-02-16T00:07:00Z [verbose] multus-daemon started\\\\n2026-02-16T00:07:00Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T00:07:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vsqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dv2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.356661 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87629f1e-d9d5-4302-a92a-f9ac3bad1707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fgr4f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.367931 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"542df46c-6d93-45b6-bf8d-d54d4e9febfe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d514d688c0cca450cc92f9bc0c0c996bee10cf7d03b1c4b2e30d5afd36db85df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b755a5466b8fefa2667afc5e8cc0a6f22f583ce8800b10f8836d9957364d5315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b755a5466b8fefa2667afc5e8cc0a6f22f583ce8800b10f8836d9957364d5315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.381582 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a39b1-d242-40d4-8179-cf7b71aaeab8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60a206ea8b608682c6898dd9051903dcdf19e39c22d1adba760b43177b474a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2915589b18e0db91214ca20d06f488bbc04f6f8a83bd4ccaaf294f99fc4aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcedd751c83fbf60506332eb79ff3d8e7bfd67099c0bcf36b1acfff96b35bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f0fa66a365e509799863c36f84750804467de502e48ca1b9eac438ec31b0520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.393360 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707f1828dabf3148d8478f6e9d2973ba08c6dde5852dd6bc8f3a59e42d710b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2bd07fca541edb75e00596141df3ea800a578fb683846aa749ef5e557a6f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.412443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.412489 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.412501 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.412520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.412533 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:11Z","lastTransitionTime":"2026-02-16T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.413830 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69861357b4916fdcbd3b1c85ace4c1742b8681a9899b04e678ff511e17e55cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.426230 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b351654-277f-4d0d-84f9-b003f934936c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f17525ed4fddd297a96f79a56d61bb49a859ca3dfa9fecad63f15aeba0c3b4\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5rp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-z56m2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.440067 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.452040 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.463894 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-256rr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24906ea4-6ef0-4686-a810-4f6da05061f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://645c7b88260df637b7fd1281481e001e57f50236a7dd94231081a6d7b781acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9rrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-256rr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.480251 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7a940-fd0a-4b48-8cdd-086dd7ef42eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b9f244b951d5239f9c5d76101b18f4414a38bc0474f502dd70e3dd0eed00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e24a691e92de7fbfd6fad27ff1c960aa48b9a3fb8be9fd9c03065c360fe3d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mblf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckgrt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.498226 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca29dec-0622-4036-b9b7-9d1ab05147df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:06:45Z\\\",\\\"message\\\":\\\"W0216 00:06:34.732884 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 00:06:34.733299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771200394 cert, and key in /tmp/serving-cert-1215333672/serving-signer.crt, /tmp/serving-cert-1215333672/serving-signer.key\\\\nI0216 00:06:35.136494 1 observer_polling.go:159] Starting file observer\\\\nW0216 00:06:35.139230 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 00:06:35.139461 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 00:06:35.141017 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1215333672/tls.crt::/tmp/serving-cert-1215333672/tls.key\\\\\\\"\\\\nF0216 00:06:45.429075 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.516124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.516171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.516183 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.516206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.516226 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:11Z","lastTransitionTime":"2026-02-16T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.517385 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b0dfe1-3eb5-4e37-bedf-302422793856\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd60ec21fe5d3c2a0ea38bbfb1dd6474562470ea86da30580f1a393e6a2214b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50c2d4e5025
0fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baf183e2003107676beefad313760154246532d06d5b6b13a83b3385904fafe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cc23b239fa5c101af53d316d2e101ab87f35e71f526870b91f1e2d59ade974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.531335 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9a5c48b6ef629201b833fc871ba75322713280df8affb4390a9c232b8d6440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.543610 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.558815 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rs8xm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c8dc67-ba64-4599-a153-2e1b9b6627b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb397f91235c8a4c3afb6891e5b1a83189fb381b064ac3701161db41434f6e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://989a1d8c852878be863c50c136e06b0d53e1877a491eb4ce025b6aa80ed14ed7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c246a6ca50e7d21f3c81efeb60526b5b07ee1aa43c6e429a5fcb3893c291f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:06:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://282daa582c09173b3276728c43846b9c403c8dd94ce6c6779065f30611e43a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6157
bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6157bc41c791b0d997a2ab1ffd6e4ba9c96438028aa9d778e94857eaa4dd3ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49ebbe568bd83ae664ef046a6da8b5c41a28e160ce4e3b4233bf79514f09d14f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b532e2242bd0e7cdef6b4605a29c8611dec9e8c103b5b3c8244dad62c737c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2rgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:06:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rs8xm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.619909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.619975 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.619995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.620020 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.620037 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:11Z","lastTransitionTime":"2026-02-16T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.723312 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.723440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.723466 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.723498 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.723522 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:11Z","lastTransitionTime":"2026-02-16T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.825914 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.826014 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.826034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.826060 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.826078 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:11Z","lastTransitionTime":"2026-02-16T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.929182 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.929240 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.929256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.929283 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:11 crc kubenswrapper[4698]: I0216 00:08:11.929301 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:11Z","lastTransitionTime":"2026-02-16T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.032513 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.032576 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.032594 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.032650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.032670 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:12Z","lastTransitionTime":"2026-02-16T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.135728 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.135787 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.135800 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.135823 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.135836 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:12Z","lastTransitionTime":"2026-02-16T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.230642 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:12 crc kubenswrapper[4698]: E0216 00:08:12.230776 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.238770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.238826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.238844 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.238868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.238884 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:12Z","lastTransitionTime":"2026-02-16T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.297033 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 08:01:20.216386333 +0000 UTC Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.341729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.341766 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.341776 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.341792 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.341802 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:12Z","lastTransitionTime":"2026-02-16T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.445505 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.445541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.445553 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.445568 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.445577 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:12Z","lastTransitionTime":"2026-02-16T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.549601 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.549727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.549747 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.549774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.549794 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:12Z","lastTransitionTime":"2026-02-16T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.653397 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.653822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.654010 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.654139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.654259 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:12Z","lastTransitionTime":"2026-02-16T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.758036 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.758101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.758121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.758146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.758166 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:12Z","lastTransitionTime":"2026-02-16T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.861942 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.862033 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.862058 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.862095 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.862120 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:12Z","lastTransitionTime":"2026-02-16T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.965335 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.965410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.965425 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.965454 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:12 crc kubenswrapper[4698]: I0216 00:08:12.965473 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:12Z","lastTransitionTime":"2026-02-16T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.069364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.069429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.069444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.069470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.069489 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:13Z","lastTransitionTime":"2026-02-16T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.173133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.173207 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.173236 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.173283 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.173307 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:13Z","lastTransitionTime":"2026-02-16T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.237040 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:13 crc kubenswrapper[4698]: E0216 00:08:13.237304 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.237517 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.237814 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:13 crc kubenswrapper[4698]: E0216 00:08:13.237814 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:13 crc kubenswrapper[4698]: E0216 00:08:13.237961 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.278296 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.278356 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.278372 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.278396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.278415 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:13Z","lastTransitionTime":"2026-02-16T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.297570 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:51:03.629043538 +0000 UTC Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.381870 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.381938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.381956 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.381983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.382004 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:13Z","lastTransitionTime":"2026-02-16T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.485589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.485657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.485671 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.485688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.485700 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:13Z","lastTransitionTime":"2026-02-16T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.588584 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.588673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.588688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.588711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.588729 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:13Z","lastTransitionTime":"2026-02-16T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.692197 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.692386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.692410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.692441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.692462 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:13Z","lastTransitionTime":"2026-02-16T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.795001 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.795062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.795080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.795105 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.795123 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:13Z","lastTransitionTime":"2026-02-16T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.898899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.898964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.898983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.899013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:13 crc kubenswrapper[4698]: I0216 00:08:13.899032 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:13Z","lastTransitionTime":"2026-02-16T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.002745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.002832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.002862 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.002893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.002915 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:14Z","lastTransitionTime":"2026-02-16T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.106900 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.106973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.106995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.107025 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.107044 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:14Z","lastTransitionTime":"2026-02-16T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.209796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.210248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.210407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.210588 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.210819 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:14Z","lastTransitionTime":"2026-02-16T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.231227 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:14 crc kubenswrapper[4698]: E0216 00:08:14.231488 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.297862 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:46:00.533706895 +0000 UTC Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.314286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.314324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.314334 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.314352 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.314367 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:14Z","lastTransitionTime":"2026-02-16T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.417525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.417720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.417750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.417777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.417795 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:14Z","lastTransitionTime":"2026-02-16T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.520897 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.520935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.520944 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.520960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.520969 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:14Z","lastTransitionTime":"2026-02-16T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.623108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.623149 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.623160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.623176 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.623186 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:14Z","lastTransitionTime":"2026-02-16T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.726113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.726194 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.726213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.726239 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.726258 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:14Z","lastTransitionTime":"2026-02-16T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.829378 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.829421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.829431 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.829447 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.829457 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:14Z","lastTransitionTime":"2026-02-16T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.932490 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.932545 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.932559 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.932581 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:14 crc kubenswrapper[4698]: I0216 00:08:14.932594 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:14Z","lastTransitionTime":"2026-02-16T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.036291 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.036360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.036383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.036413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.036432 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:15Z","lastTransitionTime":"2026-02-16T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.139753 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.139830 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.139856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.139891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.139918 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:15Z","lastTransitionTime":"2026-02-16T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.231571 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.231600 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.231660 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:15 crc kubenswrapper[4698]: E0216 00:08:15.232513 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:15 crc kubenswrapper[4698]: E0216 00:08:15.232703 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.232762 4698 scope.go:117] "RemoveContainer" containerID="651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27" Feb 16 00:08:15 crc kubenswrapper[4698]: E0216 00:08:15.232827 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:15 crc kubenswrapper[4698]: E0216 00:08:15.232932 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.243019 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.243091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.243110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.243140 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.243159 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:15Z","lastTransitionTime":"2026-02-16T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.298638 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:59:29.163819009 +0000 UTC Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.346368 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.346427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.346445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.346472 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.346489 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:15Z","lastTransitionTime":"2026-02-16T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.449850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.449918 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.449942 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.449973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.449995 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:15Z","lastTransitionTime":"2026-02-16T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.553406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.553479 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.553515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.553546 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.553566 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:15Z","lastTransitionTime":"2026-02-16T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.656308 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.656363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.656377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.656398 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.656411 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:15Z","lastTransitionTime":"2026-02-16T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.758850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.758897 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.758910 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.758931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.758945 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:15Z","lastTransitionTime":"2026-02-16T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.861399 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.861480 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.861506 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.861541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.861565 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:15Z","lastTransitionTime":"2026-02-16T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.964141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.964192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.964210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.964232 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:15 crc kubenswrapper[4698]: I0216 00:08:15.964245 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:15Z","lastTransitionTime":"2026-02-16T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.067606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.067721 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.067742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.067772 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.067797 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:16Z","lastTransitionTime":"2026-02-16T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.171404 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.171467 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.171485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.171508 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.171520 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:16Z","lastTransitionTime":"2026-02-16T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.230951 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:16 crc kubenswrapper[4698]: E0216 00:08:16.231113 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.274954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.275038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.275066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.275100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.275121 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:16Z","lastTransitionTime":"2026-02-16T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.299559 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:04:45.93495099 +0000 UTC Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.379162 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.379227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.379247 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.379273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.379291 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:16Z","lastTransitionTime":"2026-02-16T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.483038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.483110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.483136 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.483305 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.483360 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:16Z","lastTransitionTime":"2026-02-16T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.587838 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.587919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.587940 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.587965 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.587985 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:16Z","lastTransitionTime":"2026-02-16T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.691715 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.691804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.691824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.691856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.691877 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:16Z","lastTransitionTime":"2026-02-16T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.780166 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs\") pod \"network-metrics-daemon-fgr4f\" (UID: \"87629f1e-d9d5-4302-a92a-f9ac3bad1707\") " pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:16 crc kubenswrapper[4698]: E0216 00:08:16.780535 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:08:16 crc kubenswrapper[4698]: E0216 00:08:16.780781 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs podName:87629f1e-d9d5-4302-a92a-f9ac3bad1707 nodeName:}" failed. No retries permitted until 2026-02-16 00:09:20.780727873 +0000 UTC m=+170.438626845 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs") pod "network-metrics-daemon-fgr4f" (UID: "87629f1e-d9d5-4302-a92a-f9ac3bad1707") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.794849 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.794915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.794939 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.794969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.794989 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:16Z","lastTransitionTime":"2026-02-16T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.898158 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.898224 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.898242 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.898272 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:16 crc kubenswrapper[4698]: I0216 00:08:16.898293 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:16Z","lastTransitionTime":"2026-02-16T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.002552 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.002647 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.002667 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.002695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.002717 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:17Z","lastTransitionTime":"2026-02-16T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.106015 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.106077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.106098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.106131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.106153 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:17Z","lastTransitionTime":"2026-02-16T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.209224 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.209271 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.209283 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.209301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.209313 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:17Z","lastTransitionTime":"2026-02-16T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.231934 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.231976 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:17 crc kubenswrapper[4698]: E0216 00:08:17.232112 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.232169 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:17 crc kubenswrapper[4698]: E0216 00:08:17.232427 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:17 crc kubenswrapper[4698]: E0216 00:08:17.232514 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.300549 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 04:46:23.603984659 +0000 UTC Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.312865 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.312902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.312911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.312927 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.312935 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:17Z","lastTransitionTime":"2026-02-16T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.415831 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.415891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.415907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.415931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.415944 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:17Z","lastTransitionTime":"2026-02-16T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.518669 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.518736 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.518748 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.518765 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.518777 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:17Z","lastTransitionTime":"2026-02-16T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.622047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.622097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.622109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.622128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.622139 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:17Z","lastTransitionTime":"2026-02-16T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.726045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.726101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.726114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.726138 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.726151 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:17Z","lastTransitionTime":"2026-02-16T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.829337 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.829395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.829412 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.829436 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.829454 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:17Z","lastTransitionTime":"2026-02-16T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.932782 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.932861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.932885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.932918 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:17 crc kubenswrapper[4698]: I0216 00:08:17.932944 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:17Z","lastTransitionTime":"2026-02-16T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.037245 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.037316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.037344 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.037375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.037397 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:18Z","lastTransitionTime":"2026-02-16T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.140342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.140410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.140430 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.140446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.140456 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:18Z","lastTransitionTime":"2026-02-16T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.230874 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:18 crc kubenswrapper[4698]: E0216 00:08:18.231021 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.242403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.242429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.242441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.242454 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.242464 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:18Z","lastTransitionTime":"2026-02-16T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.301522 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:24:45.127919196 +0000 UTC Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.345223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.345277 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.345290 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.345312 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.345326 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:18Z","lastTransitionTime":"2026-02-16T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.448399 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.448472 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.448498 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.448531 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.448556 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:18Z","lastTransitionTime":"2026-02-16T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.550904 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.550947 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.550958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.550977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.550993 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:18Z","lastTransitionTime":"2026-02-16T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.653387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.653431 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.653444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.653463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.653473 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:18Z","lastTransitionTime":"2026-02-16T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.756273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.756323 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.756334 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.756350 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.756360 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:18Z","lastTransitionTime":"2026-02-16T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.859353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.859448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.859462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.859501 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.859514 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:18Z","lastTransitionTime":"2026-02-16T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.962941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.963031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.963045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.963064 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:18 crc kubenswrapper[4698]: I0216 00:08:18.963076 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:18Z","lastTransitionTime":"2026-02-16T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.065311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.065361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.065371 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.065390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.065400 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:19Z","lastTransitionTime":"2026-02-16T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.167331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.167362 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.167374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.167388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.167397 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:19Z","lastTransitionTime":"2026-02-16T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.231188 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.231223 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.231188 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:19 crc kubenswrapper[4698]: E0216 00:08:19.231349 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:19 crc kubenswrapper[4698]: E0216 00:08:19.231591 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:19 crc kubenswrapper[4698]: E0216 00:08:19.231777 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.269645 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.269687 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.269698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.269718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.269728 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:19Z","lastTransitionTime":"2026-02-16T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.302605 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:09:07.84211071 +0000 UTC Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.372223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.372262 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.372274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.372294 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.372309 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:19Z","lastTransitionTime":"2026-02-16T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.474959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.475020 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.475041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.475065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.475083 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:19Z","lastTransitionTime":"2026-02-16T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.577548 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.577948 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.578066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.578205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.578365 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:19Z","lastTransitionTime":"2026-02-16T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.681343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.681764 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.681929 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.682055 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.682169 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:19Z","lastTransitionTime":"2026-02-16T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.785004 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.785314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.785388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.785459 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.785542 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:19Z","lastTransitionTime":"2026-02-16T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.888507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.888584 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.888603 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.888674 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.888693 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:19Z","lastTransitionTime":"2026-02-16T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.992306 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.992798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.993012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.993497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:19 crc kubenswrapper[4698]: I0216 00:08:19.993793 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:19Z","lastTransitionTime":"2026-02-16T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.097229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.097321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.097349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.097406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.097432 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:20Z","lastTransitionTime":"2026-02-16T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.167350 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.167423 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.167444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.167472 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.167490 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:08:20Z","lastTransitionTime":"2026-02-16T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.231571 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:20 crc kubenswrapper[4698]: E0216 00:08:20.231760 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.235729 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf"] Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.236207 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.238679 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.238856 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.239078 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.241339 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.287093 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-256rr" podStartSLOduration=83.287072535 podStartE2EDuration="1m23.287072535s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:20.286735974 +0000 UTC m=+109.944634756" watchObservedRunningTime="2026-02-16 00:08:20.287072535 +0000 UTC m=+109.944971297" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.303235 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 
UTC, rotation deadline is 2025-11-30 06:48:54.116315542 +0000 UTC Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.303340 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.312058 4698 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.316830 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.316810684 podStartE2EDuration="1m29.316810684s" podCreationTimestamp="2026-02-16 00:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:20.31637702 +0000 UTC m=+109.974275792" watchObservedRunningTime="2026-02-16 00:08:20.316810684 +0000 UTC m=+109.974709456" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.321231 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd02b6fd-ff60-4b68-aee3-497e1633bd22-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.321525 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fd02b6fd-ff60-4b68-aee3-497e1633bd22-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.321690 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fd02b6fd-ff60-4b68-aee3-497e1633bd22-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.321882 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fd02b6fd-ff60-4b68-aee3-497e1633bd22-service-ca\") pod \"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.322055 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd02b6fd-ff60-4b68-aee3-497e1633bd22-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.353299 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.353275303 podStartE2EDuration="1m22.353275303s" podCreationTimestamp="2026-02-16 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:20.334608894 +0000 UTC m=+109.992507666" watchObservedRunningTime="2026-02-16 00:08:20.353275303 +0000 UTC m=+110.011174075" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.386350 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/multus-additional-cni-plugins-rs8xm" podStartSLOduration=83.386326186 podStartE2EDuration="1m23.386326186s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:20.385170109 +0000 UTC m=+110.043068911" watchObservedRunningTime="2026-02-16 00:08:20.386326186 +0000 UTC m=+110.044224958" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.422715 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fd02b6fd-ff60-4b68-aee3-497e1633bd22-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.422859 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fd02b6fd-ff60-4b68-aee3-497e1633bd22-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.423139 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fd02b6fd-ff60-4b68-aee3-497e1633bd22-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.423320 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fd02b6fd-ff60-4b68-aee3-497e1633bd22-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.423463 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd02b6fd-ff60-4b68-aee3-497e1633bd22-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.423553 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd02b6fd-ff60-4b68-aee3-497e1633bd22-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.423573 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=88.423556619 podStartE2EDuration="1m28.423556619s" podCreationTimestamp="2026-02-16 00:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:20.422780995 +0000 UTC m=+110.080679777" watchObservedRunningTime="2026-02-16 00:08:20.423556619 +0000 UTC m=+110.081455401" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.423744 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fd02b6fd-ff60-4b68-aee3-497e1633bd22-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.424146 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckgrt" podStartSLOduration=82.424137918 podStartE2EDuration="1m22.424137918s" podCreationTimestamp="2026-02-16 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:20.399122549 +0000 UTC m=+110.057021321" watchObservedRunningTime="2026-02-16 00:08:20.424137918 +0000 UTC m=+110.082036700" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.425659 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fd02b6fd-ff60-4b68-aee3-497e1633bd22-service-ca\") pod \"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.430856 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd02b6fd-ff60-4b68-aee3-497e1633bd22-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.441014 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd02b6fd-ff60-4b68-aee3-497e1633bd22-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-88ppf\" (UID: \"fd02b6fd-ff60-4b68-aee3-497e1633bd22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.460209 4698 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-m9h8s" podStartSLOduration=83.460190545 podStartE2EDuration="1m23.460190545s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:20.434213356 +0000 UTC m=+110.092112138" watchObservedRunningTime="2026-02-16 00:08:20.460190545 +0000 UTC m=+110.118089297" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.471766 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.471748439 podStartE2EDuration="21.471748439s" podCreationTimestamp="2026-02-16 00:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:20.471404178 +0000 UTC m=+110.129302980" watchObservedRunningTime="2026-02-16 00:08:20.471748439 +0000 UTC m=+110.129647201" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.487508 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.487485056 podStartE2EDuration="56.487485056s" podCreationTimestamp="2026-02-16 00:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:20.486529576 +0000 UTC m=+110.144428368" watchObservedRunningTime="2026-02-16 00:08:20.487485056 +0000 UTC m=+110.145383858" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.523365 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podStartSLOduration=83.523349067 podStartE2EDuration="1m23.523349067s" podCreationTimestamp="2026-02-16 00:06:57 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:20.523279945 +0000 UTC m=+110.181178717" watchObservedRunningTime="2026-02-16 00:08:20.523349067 +0000 UTC m=+110.181247829" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.537263 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2dv2d" podStartSLOduration=83.537231685 podStartE2EDuration="1m23.537231685s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:20.536350397 +0000 UTC m=+110.194249159" watchObservedRunningTime="2026-02-16 00:08:20.537231685 +0000 UTC m=+110.195130467" Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.559688 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" Feb 16 00:08:20 crc kubenswrapper[4698]: W0216 00:08:20.575507 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd02b6fd_ff60_4b68_aee3_497e1633bd22.slice/crio-55e0b4e580a64707b43c18a1ac1ac0afbcf41f87d293a5cdaa1f9d4db9e84480 WatchSource:0}: Error finding container 55e0b4e580a64707b43c18a1ac1ac0afbcf41f87d293a5cdaa1f9d4db9e84480: Status 404 returned error can't find the container with id 55e0b4e580a64707b43c18a1ac1ac0afbcf41f87d293a5cdaa1f9d4db9e84480 Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.883494 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" event={"ID":"fd02b6fd-ff60-4b68-aee3-497e1633bd22","Type":"ContainerStarted","Data":"f30c883f980fe0363762a3f6a761d62e687180b05e87c57b0d73508ab959527a"} Feb 16 00:08:20 crc kubenswrapper[4698]: I0216 00:08:20.883552 
4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" event={"ID":"fd02b6fd-ff60-4b68-aee3-497e1633bd22","Type":"ContainerStarted","Data":"55e0b4e580a64707b43c18a1ac1ac0afbcf41f87d293a5cdaa1f9d4db9e84480"} Feb 16 00:08:21 crc kubenswrapper[4698]: I0216 00:08:21.230994 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:21 crc kubenswrapper[4698]: I0216 00:08:21.231090 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:21 crc kubenswrapper[4698]: E0216 00:08:21.234015 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:21 crc kubenswrapper[4698]: I0216 00:08:21.234124 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:21 crc kubenswrapper[4698]: E0216 00:08:21.234272 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:21 crc kubenswrapper[4698]: E0216 00:08:21.234365 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:22 crc kubenswrapper[4698]: I0216 00:08:22.231042 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:22 crc kubenswrapper[4698]: E0216 00:08:22.231186 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:23 crc kubenswrapper[4698]: I0216 00:08:23.231359 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:23 crc kubenswrapper[4698]: I0216 00:08:23.231429 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:23 crc kubenswrapper[4698]: I0216 00:08:23.231498 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:23 crc kubenswrapper[4698]: E0216 00:08:23.231578 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:23 crc kubenswrapper[4698]: E0216 00:08:23.231713 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:23 crc kubenswrapper[4698]: E0216 00:08:23.231798 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:24 crc kubenswrapper[4698]: I0216 00:08:24.231527 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:24 crc kubenswrapper[4698]: E0216 00:08:24.232235 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:25 crc kubenswrapper[4698]: I0216 00:08:25.231277 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:25 crc kubenswrapper[4698]: I0216 00:08:25.231337 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:25 crc kubenswrapper[4698]: E0216 00:08:25.231464 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:25 crc kubenswrapper[4698]: I0216 00:08:25.231535 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:25 crc kubenswrapper[4698]: E0216 00:08:25.231569 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:25 crc kubenswrapper[4698]: E0216 00:08:25.231704 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:26 crc kubenswrapper[4698]: I0216 00:08:26.231351 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:26 crc kubenswrapper[4698]: E0216 00:08:26.232695 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:26 crc kubenswrapper[4698]: I0216 00:08:26.233270 4698 scope.go:117] "RemoveContainer" containerID="651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27" Feb 16 00:08:26 crc kubenswrapper[4698]: E0216 00:08:26.233584 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rmrt5_openshift-ovn-kubernetes(cea3368d-30b3-4bf5-8c91-a6b9c254eaf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" Feb 16 00:08:27 crc kubenswrapper[4698]: I0216 00:08:27.231734 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:27 crc kubenswrapper[4698]: I0216 00:08:27.231762 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:27 crc kubenswrapper[4698]: E0216 00:08:27.231912 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:27 crc kubenswrapper[4698]: E0216 00:08:27.231992 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:27 crc kubenswrapper[4698]: I0216 00:08:27.232671 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:27 crc kubenswrapper[4698]: E0216 00:08:27.233072 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:28 crc kubenswrapper[4698]: I0216 00:08:28.231154 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:28 crc kubenswrapper[4698]: E0216 00:08:28.231444 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:29 crc kubenswrapper[4698]: I0216 00:08:29.231407 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:29 crc kubenswrapper[4698]: I0216 00:08:29.231429 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:29 crc kubenswrapper[4698]: E0216 00:08:29.231584 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:29 crc kubenswrapper[4698]: E0216 00:08:29.231754 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:29 crc kubenswrapper[4698]: I0216 00:08:29.231429 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:29 crc kubenswrapper[4698]: E0216 00:08:29.231928 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:30 crc kubenswrapper[4698]: I0216 00:08:30.231872 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:08:30 crc kubenswrapper[4698]: E0216 00:08:30.232756 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707" Feb 16 00:08:31 crc kubenswrapper[4698]: I0216 00:08:31.232110 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:31 crc kubenswrapper[4698]: I0216 00:08:31.232351 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:31 crc kubenswrapper[4698]: I0216 00:08:31.233255 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 00:08:31 crc kubenswrapper[4698]: E0216 00:08:31.233546 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 00:08:31 crc kubenswrapper[4698]: E0216 00:08:31.233705 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 00:08:31 crc kubenswrapper[4698]: E0216 00:08:31.234433 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 00:08:31 crc kubenswrapper[4698]: E0216 00:08:31.238653 4698 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 16 00:08:31 crc kubenswrapper[4698]: E0216 00:08:31.409403 4698 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 00:08:31 crc kubenswrapper[4698]: I0216 00:08:31.933470 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dv2d_69838a3a-c20d-4770-b95f-ab85a265d53c/kube-multus/1.log"
Feb 16 00:08:31 crc kubenswrapper[4698]: I0216 00:08:31.934252 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dv2d_69838a3a-c20d-4770-b95f-ab85a265d53c/kube-multus/0.log"
Feb 16 00:08:31 crc kubenswrapper[4698]: I0216 00:08:31.934318 4698 generic.go:334] "Generic (PLEG): container finished" podID="69838a3a-c20d-4770-b95f-ab85a265d53c" containerID="0e92aaa262728b5ab9af6556d29d5558d08822fe1b06333269be4b1ed3a7abc2" exitCode=1
Feb 16 00:08:31 crc kubenswrapper[4698]: I0216 00:08:31.934363 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dv2d" event={"ID":"69838a3a-c20d-4770-b95f-ab85a265d53c","Type":"ContainerDied","Data":"0e92aaa262728b5ab9af6556d29d5558d08822fe1b06333269be4b1ed3a7abc2"}
Feb 16 00:08:31 crc kubenswrapper[4698]: I0216 00:08:31.934414 4698 scope.go:117] "RemoveContainer" containerID="3c2f6081f957d2cf0d8197de1a3cd303080eeb59e86987787fc30d1315305ada"
Feb 16 00:08:31 crc kubenswrapper[4698]: I0216 00:08:31.935198 4698 scope.go:117] "RemoveContainer" containerID="0e92aaa262728b5ab9af6556d29d5558d08822fe1b06333269be4b1ed3a7abc2"
Feb 16 00:08:31 crc kubenswrapper[4698]: E0216 00:08:31.935591 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2dv2d_openshift-multus(69838a3a-c20d-4770-b95f-ab85a265d53c)\"" pod="openshift-multus/multus-2dv2d" podUID="69838a3a-c20d-4770-b95f-ab85a265d53c"
Feb 16 00:08:31 crc kubenswrapper[4698]: I0216 00:08:31.966641 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-88ppf" podStartSLOduration=94.966595781 podStartE2EDuration="1m34.966595781s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:20.905932213 +0000 UTC m=+110.563830975" watchObservedRunningTime="2026-02-16 00:08:31.966595781 +0000 UTC m=+121.624494533"
Feb 16 00:08:32 crc kubenswrapper[4698]: I0216 00:08:32.231178 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f"
Feb 16 00:08:32 crc kubenswrapper[4698]: E0216 00:08:32.231348 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707"
Feb 16 00:08:32 crc kubenswrapper[4698]: I0216 00:08:32.940358 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dv2d_69838a3a-c20d-4770-b95f-ab85a265d53c/kube-multus/1.log"
Feb 16 00:08:33 crc kubenswrapper[4698]: I0216 00:08:33.231164 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:08:33 crc kubenswrapper[4698]: E0216 00:08:33.231440 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 00:08:33 crc kubenswrapper[4698]: I0216 00:08:33.232014 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:08:33 crc kubenswrapper[4698]: I0216 00:08:33.232023 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:08:33 crc kubenswrapper[4698]: E0216 00:08:33.232141 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 00:08:33 crc kubenswrapper[4698]: E0216 00:08:33.232300 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 00:08:34 crc kubenswrapper[4698]: I0216 00:08:34.231557 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f"
Feb 16 00:08:34 crc kubenswrapper[4698]: E0216 00:08:34.231806 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707"
Feb 16 00:08:35 crc kubenswrapper[4698]: I0216 00:08:35.231401 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:08:35 crc kubenswrapper[4698]: I0216 00:08:35.231509 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:08:35 crc kubenswrapper[4698]: E0216 00:08:35.231576 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 00:08:35 crc kubenswrapper[4698]: E0216 00:08:35.231722 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 00:08:35 crc kubenswrapper[4698]: I0216 00:08:35.231535 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:08:35 crc kubenswrapper[4698]: E0216 00:08:35.231931 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 00:08:36 crc kubenswrapper[4698]: I0216 00:08:36.231432 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f"
Feb 16 00:08:36 crc kubenswrapper[4698]: E0216 00:08:36.231582 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707"
Feb 16 00:08:36 crc kubenswrapper[4698]: E0216 00:08:36.412348 4698 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 16 00:08:37 crc kubenswrapper[4698]: I0216 00:08:37.230830 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:08:37 crc kubenswrapper[4698]: I0216 00:08:37.230933 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:08:37 crc kubenswrapper[4698]: E0216 00:08:37.230980 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 00:08:37 crc kubenswrapper[4698]: E0216 00:08:37.231120 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 00:08:37 crc kubenswrapper[4698]: I0216 00:08:37.230831 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:08:37 crc kubenswrapper[4698]: E0216 00:08:37.231201 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 00:08:38 crc kubenswrapper[4698]: I0216 00:08:38.230958 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f"
Feb 16 00:08:38 crc kubenswrapper[4698]: E0216 00:08:38.231142 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707"
Feb 16 00:08:39 crc kubenswrapper[4698]: I0216 00:08:39.231521 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:08:39 crc kubenswrapper[4698]: I0216 00:08:39.231530 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:08:39 crc kubenswrapper[4698]: E0216 00:08:39.231837 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 00:08:39 crc kubenswrapper[4698]: I0216 00:08:39.231881 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:08:39 crc kubenswrapper[4698]: E0216 00:08:39.232031 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 00:08:39 crc kubenswrapper[4698]: E0216 00:08:39.232151 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 00:08:40 crc kubenswrapper[4698]: I0216 00:08:40.230813 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f"
Feb 16 00:08:40 crc kubenswrapper[4698]: E0216 00:08:40.231053 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707"
Feb 16 00:08:41 crc kubenswrapper[4698]: I0216 00:08:41.231972 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:08:41 crc kubenswrapper[4698]: I0216 00:08:41.231970 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:08:41 crc kubenswrapper[4698]: I0216 00:08:41.232063 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:08:41 crc kubenswrapper[4698]: E0216 00:08:41.234209 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 00:08:41 crc kubenswrapper[4698]: E0216 00:08:41.234394 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 00:08:41 crc kubenswrapper[4698]: E0216 00:08:41.234516 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 00:08:41 crc kubenswrapper[4698]: I0216 00:08:41.235815 4698 scope.go:117] "RemoveContainer" containerID="651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27"
Feb 16 00:08:41 crc kubenswrapper[4698]: E0216 00:08:41.413572 4698 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 16 00:08:41 crc kubenswrapper[4698]: I0216 00:08:41.974368 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/3.log"
Feb 16 00:08:41 crc kubenswrapper[4698]: I0216 00:08:41.977608 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerStarted","Data":"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e"}
Feb 16 00:08:41 crc kubenswrapper[4698]: I0216 00:08:41.978156 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5"
Feb 16 00:08:42 crc kubenswrapper[4698]: I0216 00:08:42.015649 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podStartSLOduration=105.015629887 podStartE2EDuration="1m45.015629887s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:42.014075207 +0000 UTC m=+131.671973979" watchObservedRunningTime="2026-02-16 00:08:42.015629887 +0000 UTC m=+131.673528649"
Feb 16 00:08:42 crc kubenswrapper[4698]: I0216 00:08:42.231804 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f"
Feb 16 00:08:42 crc kubenswrapper[4698]: E0216 00:08:42.232035 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707"
Feb 16 00:08:42 crc kubenswrapper[4698]: I0216 00:08:42.359101 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fgr4f"]
Feb 16 00:08:42 crc kubenswrapper[4698]: I0216 00:08:42.981670 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f"
Feb 16 00:08:42 crc kubenswrapper[4698]: E0216 00:08:42.982416 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707"
Feb 16 00:08:43 crc kubenswrapper[4698]: I0216 00:08:43.231153 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:08:43 crc kubenswrapper[4698]: E0216 00:08:43.231338 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 00:08:43 crc kubenswrapper[4698]: I0216 00:08:43.231581 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:08:43 crc kubenswrapper[4698]: E0216 00:08:43.231673 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 00:08:43 crc kubenswrapper[4698]: I0216 00:08:43.231953 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:08:43 crc kubenswrapper[4698]: E0216 00:08:43.232153 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 00:08:43 crc kubenswrapper[4698]: I0216 00:08:43.232854 4698 scope.go:117] "RemoveContainer" containerID="0e92aaa262728b5ab9af6556d29d5558d08822fe1b06333269be4b1ed3a7abc2"
Feb 16 00:08:43 crc kubenswrapper[4698]: I0216 00:08:43.988860 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dv2d_69838a3a-c20d-4770-b95f-ab85a265d53c/kube-multus/1.log"
Feb 16 00:08:43 crc kubenswrapper[4698]: I0216 00:08:43.989312 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dv2d" event={"ID":"69838a3a-c20d-4770-b95f-ab85a265d53c","Type":"ContainerStarted","Data":"89b1308232f81e46ec49509566a9454686396ff65a1b76bf4537910414500054"}
Feb 16 00:08:45 crc kubenswrapper[4698]: I0216 00:08:45.230746 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:08:45 crc kubenswrapper[4698]: I0216 00:08:45.230902 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:08:45 crc kubenswrapper[4698]: I0216 00:08:45.230926 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:08:45 crc kubenswrapper[4698]: E0216 00:08:45.231024 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 00:08:45 crc kubenswrapper[4698]: I0216 00:08:45.231148 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f"
Feb 16 00:08:45 crc kubenswrapper[4698]: E0216 00:08:45.231146 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 00:08:45 crc kubenswrapper[4698]: E0216 00:08:45.231233 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 00:08:45 crc kubenswrapper[4698]: E0216 00:08:45.231399 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgr4f" podUID="87629f1e-d9d5-4302-a92a-f9ac3bad1707"
Feb 16 00:08:47 crc kubenswrapper[4698]: I0216 00:08:47.231080 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:08:47 crc kubenswrapper[4698]: I0216 00:08:47.231907 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f"
Feb 16 00:08:47 crc kubenswrapper[4698]: I0216 00:08:47.232572 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:08:47 crc kubenswrapper[4698]: I0216 00:08:47.232707 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:08:47 crc kubenswrapper[4698]: I0216 00:08:47.234738 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 00:08:47 crc kubenswrapper[4698]: I0216 00:08:47.234894 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 16 00:08:47 crc kubenswrapper[4698]: I0216 00:08:47.235019 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 00:08:47 crc kubenswrapper[4698]: I0216 00:08:47.235179 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 16 00:08:47 crc kubenswrapper[4698]: I0216 00:08:47.235261 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 16 00:08:47 crc kubenswrapper[4698]: I0216 00:08:47.235896 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 16 00:08:50 crc kubenswrapper[4698]: I0216 00:08:50.953005 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.013257 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-v8kwd"]
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.014869 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.026787 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.027047 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.027254 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.027580 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.027736 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.027867 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.030451 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kkhhq"]
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.037984 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-kkhhq"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.046306 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rn7cx"]
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.046984 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.047870 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr"]
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.048140 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.048419 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6"]
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.049154 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.049603 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ckzvr"]
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.050597 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.059249 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9vkwc"]
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.059819 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7"]
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.060167 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bg97c"]
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.060585 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.061442 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.061891 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.072063 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bsd9j"]
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.073528 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bsd9j"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.074410 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.074541 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.074655 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.074754 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.074885 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.075192 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.075396 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.075412 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.075550 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.075726 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.075949 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.076102 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.076145 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.078182 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.078229 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.078383 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.076443 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.078486 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.077138 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.077170 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.078601 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.077252 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.077289 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.077377 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.077408 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079049 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079128 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079191 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079204 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079256 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079325 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079331 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079393 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079437 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079463 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079508 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079664 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079765 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079949 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.080104 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.080134 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.080276 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]:
I0216 00:08:51.080439 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.080511 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.080601 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.079789 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.081007 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.081220 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.081425 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.081639 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.081047 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.082307 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.085144 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29520000-k87pz"] Feb 16 00:08:51 crc 
kubenswrapper[4698]: I0216 00:08:51.085867 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-x6shs"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.086314 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.087079 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29520000-k87pz" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.091762 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.092374 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.092588 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.092878 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.093470 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.093904 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.094101 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.101707 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.101987 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.102305 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.102436 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.123419 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.124224 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.124337 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.125800 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.126149 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.126403 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 00:08:51 crc 
kubenswrapper[4698]: I0216 00:08:51.126746 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.127134 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.127405 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.132732 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6wzm8"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.134637 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.136995 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.138235 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.138821 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.151879 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.151987 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.152321 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 00:08:51 crc 
kubenswrapper[4698]: I0216 00:08:51.152342 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.152420 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.152450 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.152596 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.153179 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.153373 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.155151 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.155357 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hbrff"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.156051 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4qvcf"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.156353 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.156503 4698 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.157263 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.157373 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.157444 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.157582 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.157601 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.157880 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4qvcf" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.159027 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.159899 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.160258 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.161268 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.166812 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.167592 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.167708 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.167969 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.169909 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.170394 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.170475 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.169957 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.170932 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.171136 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.173771 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.174390 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.197881 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 
16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.199093 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.199816 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.202056 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.202369 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-db2vn"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.204098 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-db2vn" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.204500 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.204705 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/517bde6b-579b-4047-a627-315b3722d147-encryption-config\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.206194 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f3cf6a0-528b-4e38-9e30-f274f3caa4a4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4j499\" (UID: \"3f3cf6a0-528b-4e38-9e30-f274f3caa4a4\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.206316 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.206426 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-trusted-ca-bundle\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.206574 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.206686 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-service-ca\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.206782 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/517bde6b-579b-4047-a627-315b3722d147-node-pullsecrets\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.206876 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-image-import-ca\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.206963 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-encryption-config\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.207077 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96145a82-f664-45ba-805c-3721f813c8a9-serving-cert\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.207157 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8w5f\" (UniqueName: \"kubernetes.io/projected/517bde6b-579b-4047-a627-315b3722d147-kube-api-access-h8w5f\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.211642 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhdrn\" (UniqueName: \"kubernetes.io/projected/3536e99a-ec06-422f-9944-20d3e4eca295-kube-api-access-xhdrn\") pod \"console-operator-58897d9998-bsd9j\" (UID: \"3536e99a-ec06-422f-9944-20d3e4eca295\") " pod="openshift-console-operator/console-operator-58897d9998-bsd9j" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.211807 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-console-config\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.211896 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.211968 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-etcd-client\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.208308 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.212099 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.205018 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.212242 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a754e420-4dd0-4ab2-b492-f088b31c3dca-config\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.212885 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.212901 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w28pg\" (UniqueName: \"kubernetes.io/projected/730d66a3-4e9f-4c36-9ce0-0c7e981b36ae-kube-api-access-w28pg\") pod \"openshift-config-operator-7777fb866f-bg97c\" (UID: \"730d66a3-4e9f-4c36-9ce0-0c7e981b36ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.212948 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213032 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54sgt\" (UniqueName: \"kubernetes.io/projected/96145a82-f664-45ba-805c-3721f813c8a9-kube-api-access-54sgt\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213080 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgm2s\" (UniqueName: \"kubernetes.io/projected/f6a60c76-1a30-4b5c-a984-08eef4aedb2b-kube-api-access-bgm2s\") pod \"machine-api-operator-5694c8668f-v8kwd\" (UID: \"f6a60c76-1a30-4b5c-a984-08eef4aedb2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213114 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a0c45070-058d-4223-a78e-11b1319eff38-serviceca\") pod \"image-pruner-29520000-k87pz\" (UID: \"a0c45070-058d-4223-a78e-11b1319eff38\") " pod="openshift-image-registry/image-pruner-29520000-k87pz" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213150 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5b48014-cea0-4a23-80a0-0022370c5e7c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bhfmf\" (UID: \"f5b48014-cea0-4a23-80a0-0022370c5e7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213189 
4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-config\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213221 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-etcd-serving-ca\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213256 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6a60c76-1a30-4b5c-a984-08eef4aedb2b-config\") pod \"machine-api-operator-5694c8668f-v8kwd\" (UID: \"f6a60c76-1a30-4b5c-a984-08eef4aedb2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213289 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-serving-cert\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213328 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-audit\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 
crc kubenswrapper[4698]: I0216 00:08:51.213363 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-oauth-serving-cert\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213397 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213436 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f6a60c76-1a30-4b5c-a984-08eef4aedb2b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-v8kwd\" (UID: \"f6a60c76-1a30-4b5c-a984-08eef4aedb2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213474 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-audit-dir\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213522 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f3cf6a0-528b-4e38-9e30-f274f3caa4a4-config\") pod 
\"kube-controller-manager-operator-78b949d7b-4j499\" (UID: \"3f3cf6a0-528b-4e38-9e30-f274f3caa4a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213552 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f3cf6a0-528b-4e38-9e30-f274f3caa4a4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4j499\" (UID: \"3f3cf6a0-528b-4e38-9e30-f274f3caa4a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213577 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213602 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4qlb\" (UniqueName: \"kubernetes.io/projected/a754e420-4dd0-4ab2-b492-f088b31c3dca-kube-api-access-t4qlb\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213663 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.213688 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92d745b7-0280-480b-b052-c2fd5499c43e-client-ca\") pod \"route-controller-manager-6576b87f9c-k8vxr\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.217650 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5psf7\" (UniqueName: \"kubernetes.io/projected/92d745b7-0280-480b-b052-c2fd5499c43e-kube-api-access-5psf7\") pod \"route-controller-manager-6576b87f9c-k8vxr\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.217768 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/730d66a3-4e9f-4c36-9ce0-0c7e981b36ae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bg97c\" (UID: \"730d66a3-4e9f-4c36-9ce0-0c7e981b36ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.217867 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxng9\" (UniqueName: \"kubernetes.io/projected/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-kube-api-access-pxng9\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 
00:08:51.214081 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.218002 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-client-ca\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.218191 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct84l\" (UniqueName: \"kubernetes.io/projected/f5b48014-cea0-4a23-80a0-0022370c5e7c-kube-api-access-ct84l\") pod \"cluster-samples-operator-665b6dd947-bhfmf\" (UID: \"f5b48014-cea0-4a23-80a0-0022370c5e7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.218290 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-console-serving-cert\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.218395 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a754e420-4dd0-4ab2-b492-f088b31c3dca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 
00:08:51.218481 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc-auth-proxy-config\") pod \"machine-approver-56656f9798-b76p7\" (UID: \"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.218566 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3536e99a-ec06-422f-9944-20d3e4eca295-trusted-ca\") pod \"console-operator-58897d9998-bsd9j\" (UID: \"3536e99a-ec06-422f-9944-20d3e4eca295\") " pod="openshift-console-operator/console-operator-58897d9998-bsd9j" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.218665 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-console-oauth-config\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.218941 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a754e420-4dd0-4ab2-b492-f088b31c3dca-service-ca-bundle\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.219091 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d745b7-0280-480b-b052-c2fd5499c43e-config\") pod 
\"route-controller-manager-6576b87f9c-k8vxr\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.219172 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.219357 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a754e420-4dd0-4ab2-b492-f088b31c3dca-serving-cert\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.219451 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc-config\") pod \"machine-approver-56656f9798-b76p7\" (UID: \"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.219600 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/730d66a3-4e9f-4c36-9ce0-0c7e981b36ae-serving-cert\") pod \"openshift-config-operator-7777fb866f-bg97c\" (UID: \"730d66a3-4e9f-4c36-9ce0-0c7e981b36ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 
00:08:51.219812 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f6a60c76-1a30-4b5c-a984-08eef4aedb2b-images\") pod \"machine-api-operator-5694c8668f-v8kwd\" (UID: \"f6a60c76-1a30-4b5c-a984-08eef4aedb2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.219874 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc-machine-approver-tls\") pod \"machine-approver-56656f9798-b76p7\" (UID: \"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.219960 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.219996 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/517bde6b-579b-4047-a627-315b3722d147-serving-cert\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.220024 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-audit-dir\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: 
\"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.220050 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.220086 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92d745b7-0280-480b-b052-c2fd5499c43e-serving-cert\") pod \"route-controller-manager-6576b87f9c-k8vxr\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.220109 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.220133 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 
00:08:51.220154 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.222041 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.221745 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbnms\" (UniqueName: \"kubernetes.io/projected/84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc-kube-api-access-gbnms\") pod \"machine-approver-56656f9798-b76p7\" (UID: \"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.222237 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztzzj\" (UniqueName: \"kubernetes.io/projected/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-kube-api-access-ztzzj\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.222277 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-audit-policies\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.222553 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.222594 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n9s5\" (UniqueName: \"kubernetes.io/projected/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-kube-api-access-5n9s5\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.222663 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.222713 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/517bde6b-579b-4047-a627-315b3722d147-audit-dir\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.223594 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-audit-policies\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: 
\"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.223686 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3536e99a-ec06-422f-9944-20d3e4eca295-serving-cert\") pod \"console-operator-58897d9998-bsd9j\" (UID: \"3536e99a-ec06-422f-9944-20d3e4eca295\") " pod="openshift-console-operator/console-operator-58897d9998-bsd9j" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.223729 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/517bde6b-579b-4047-a627-315b3722d147-etcd-client\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.223758 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-config\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.223798 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l57wn\" (UniqueName: \"kubernetes.io/projected/a0c45070-058d-4223-a78e-11b1319eff38-kube-api-access-l57wn\") pod \"image-pruner-29520000-k87pz\" (UID: \"a0c45070-058d-4223-a78e-11b1319eff38\") " pod="openshift-image-registry/image-pruner-29520000-k87pz" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.223953 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3536e99a-ec06-422f-9944-20d3e4eca295-config\") pod \"console-operator-58897d9998-bsd9j\" (UID: \"3536e99a-ec06-422f-9944-20d3e4eca295\") " pod="openshift-console-operator/console-operator-58897d9998-bsd9j" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.226265 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2dhzm"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.229957 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.242541 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.248203 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.248953 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xt8xd"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.249417 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fh5qr"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.249854 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dmvbr"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.250156 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.250502 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.250918 4698 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.251080 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cbn44"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.251180 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.251327 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xt8xd" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.251470 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fh5qr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.251686 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.251667 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cbn44" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.252213 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dmvbr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.253111 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.255299 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.255581 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.256195 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.256302 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.257142 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xks74"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.257414 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.257415 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.258591 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.258833 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-g6jpt"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.259634 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-g6jpt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.260073 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-dc98z"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.260573 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dc98z" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.261188 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wdfct"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.262297 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.262576 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bg97c"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.263891 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.266947 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.268292 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.269289 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29520000-k87pz"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.270706 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.272404 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.273597 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kkhhq"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.274587 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rn7cx"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.276771 4698 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.280535 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.283289 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x6shs"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.284787 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.285954 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ckzvr"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.287126 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9vkwc"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.288288 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xt8xd"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.289477 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bsd9j"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.290585 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dmvbr"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.291754 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-v8kwd"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.293286 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6wzm8"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 
00:08:51.294517 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4qvcf"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.295849 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ngf8q"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.297271 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.297851 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ngf8q" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.299272 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.304841 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.309369 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-db2vn"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.315809 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.317238 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.317309 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.318692 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hbrff"] Feb 16 
00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.319933 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fh5qr"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.321079 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ngf8q"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.322282 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wdfct"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.323489 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.324693 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-config\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.324803 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l57wn\" (UniqueName: \"kubernetes.io/projected/a0c45070-058d-4223-a78e-11b1319eff38-kube-api-access-l57wn\") pod \"image-pruner-29520000-k87pz\" (UID: \"a0c45070-058d-4223-a78e-11b1319eff38\") " pod="openshift-image-registry/image-pruner-29520000-k87pz" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.324881 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3536e99a-ec06-422f-9944-20d3e4eca295-config\") pod \"console-operator-58897d9998-bsd9j\" (UID: \"3536e99a-ec06-422f-9944-20d3e4eca295\") " pod="openshift-console-operator/console-operator-58897d9998-bsd9j" Feb 16 00:08:51 crc kubenswrapper[4698]: 
I0216 00:08:51.324959 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/517bde6b-579b-4047-a627-315b3722d147-encryption-config\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.325029 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f3cf6a0-528b-4e38-9e30-f274f3caa4a4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4j499\" (UID: \"3f3cf6a0-528b-4e38-9e30-f274f3caa4a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.325075 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cbn44"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.325157 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.325237 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-trusted-ca-bundle\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.325307 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" 
(UniqueName: \"kubernetes.io/host-path/517bde6b-579b-4047-a627-315b3722d147-node-pullsecrets\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.325403 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-image-import-ca\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.325475 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.325559 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-service-ca\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.325655 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-encryption-config\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.325729 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/96145a82-f664-45ba-805c-3721f813c8a9-serving-cert\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.325797 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8w5f\" (UniqueName: \"kubernetes.io/projected/517bde6b-579b-4047-a627-315b3722d147-kube-api-access-h8w5f\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.325880 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhdrn\" (UniqueName: \"kubernetes.io/projected/3536e99a-ec06-422f-9944-20d3e4eca295-kube-api-access-xhdrn\") pod \"console-operator-58897d9998-bsd9j\" (UID: \"3536e99a-ec06-422f-9944-20d3e4eca295\") " pod="openshift-console-operator/console-operator-58897d9998-bsd9j" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.325956 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-console-config\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.326044 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3536e99a-ec06-422f-9944-20d3e4eca295-config\") pod \"console-operator-58897d9998-bsd9j\" (UID: \"3536e99a-ec06-422f-9944-20d3e4eca295\") " pod="openshift-console-operator/console-operator-58897d9998-bsd9j" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.325516 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/517bde6b-579b-4047-a627-315b3722d147-node-pullsecrets\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.326098 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.326199 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.326298 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-etcd-client\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.326370 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 
00:08:51.326481 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w28pg\" (UniqueName: \"kubernetes.io/projected/730d66a3-4e9f-4c36-9ce0-0c7e981b36ae-kube-api-access-w28pg\") pod \"openshift-config-operator-7777fb866f-bg97c\" (UID: \"730d66a3-4e9f-4c36-9ce0-0c7e981b36ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.326556 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.326645 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a754e420-4dd0-4ab2-b492-f088b31c3dca-config\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.326730 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54sgt\" (UniqueName: \"kubernetes.io/projected/96145a82-f664-45ba-805c-3721f813c8a9-kube-api-access-54sgt\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.326798 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgm2s\" (UniqueName: \"kubernetes.io/projected/f6a60c76-1a30-4b5c-a984-08eef4aedb2b-kube-api-access-bgm2s\") pod 
\"machine-api-operator-5694c8668f-v8kwd\" (UID: \"f6a60c76-1a30-4b5c-a984-08eef4aedb2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.326884 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a0c45070-058d-4223-a78e-11b1319eff38-serviceca\") pod \"image-pruner-29520000-k87pz\" (UID: \"a0c45070-058d-4223-a78e-11b1319eff38\") " pod="openshift-image-registry/image-pruner-29520000-k87pz" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.326958 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5b48014-cea0-4a23-80a0-0022370c5e7c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bhfmf\" (UID: \"f5b48014-cea0-4a23-80a0-0022370c5e7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.327034 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-config\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.327106 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-etcd-serving-ca\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.327152 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.327229 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6a60c76-1a30-4b5c-a984-08eef4aedb2b-config\") pod \"machine-api-operator-5694c8668f-v8kwd\" (UID: \"f6a60c76-1a30-4b5c-a984-08eef4aedb2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.327282 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-trusted-ca-bundle\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.326841 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-image-import-ca\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328013 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-service-ca\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.327298 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-serving-cert\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328100 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-audit\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328127 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-oauth-serving-cert\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328162 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328193 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-audit-dir\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328222 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f6a60c76-1a30-4b5c-a984-08eef4aedb2b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-v8kwd\" (UID: \"f6a60c76-1a30-4b5c-a984-08eef4aedb2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328251 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f3cf6a0-528b-4e38-9e30-f274f3caa4a4-config\") pod \"kube-controller-manager-operator-78b949d7b-4j499\" (UID: \"3f3cf6a0-528b-4e38-9e30-f274f3caa4a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328277 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f3cf6a0-528b-4e38-9e30-f274f3caa4a4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4j499\" (UID: \"3f3cf6a0-528b-4e38-9e30-f274f3caa4a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328302 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328338 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92d745b7-0280-480b-b052-c2fd5499c43e-client-ca\") pod \"route-controller-manager-6576b87f9c-k8vxr\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328360 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4qlb\" (UniqueName: \"kubernetes.io/projected/a754e420-4dd0-4ab2-b492-f088b31c3dca-kube-api-access-t4qlb\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328391 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5psf7\" (UniqueName: \"kubernetes.io/projected/92d745b7-0280-480b-b052-c2fd5499c43e-kube-api-access-5psf7\") pod \"route-controller-manager-6576b87f9c-k8vxr\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328416 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/730d66a3-4e9f-4c36-9ce0-0c7e981b36ae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bg97c\" (UID: \"730d66a3-4e9f-4c36-9ce0-0c7e981b36ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328456 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxng9\" (UniqueName: \"kubernetes.io/projected/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-kube-api-access-pxng9\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328476 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-config\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328500 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-client-ca\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.326846 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-config\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328534 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct84l\" (UniqueName: \"kubernetes.io/projected/f5b48014-cea0-4a23-80a0-0022370c5e7c-kube-api-access-ct84l\") pod \"cluster-samples-operator-665b6dd947-bhfmf\" (UID: \"f5b48014-cea0-4a23-80a0-0022370c5e7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328571 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc-auth-proxy-config\") pod \"machine-approver-56656f9798-b76p7\" (UID: \"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328598 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3536e99a-ec06-422f-9944-20d3e4eca295-trusted-ca\") pod \"console-operator-58897d9998-bsd9j\" (UID: \"3536e99a-ec06-422f-9944-20d3e4eca295\") " pod="openshift-console-operator/console-operator-58897d9998-bsd9j" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328642 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-console-serving-cert\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328673 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a754e420-4dd0-4ab2-b492-f088b31c3dca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328701 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-console-oauth-config\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328727 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a754e420-4dd0-4ab2-b492-f088b31c3dca-service-ca-bundle\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328756 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d745b7-0280-480b-b052-c2fd5499c43e-config\") pod \"route-controller-manager-6576b87f9c-k8vxr\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328782 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328807 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a754e420-4dd0-4ab2-b492-f088b31c3dca-serving-cert\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328852 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc-config\") pod \"machine-approver-56656f9798-b76p7\" (UID: \"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328878 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/730d66a3-4e9f-4c36-9ce0-0c7e981b36ae-serving-cert\") pod \"openshift-config-operator-7777fb866f-bg97c\" (UID: \"730d66a3-4e9f-4c36-9ce0-0c7e981b36ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328903 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f6a60c76-1a30-4b5c-a984-08eef4aedb2b-images\") pod \"machine-api-operator-5694c8668f-v8kwd\" (UID: \"f6a60c76-1a30-4b5c-a984-08eef4aedb2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328930 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc-machine-approver-tls\") pod \"machine-approver-56656f9798-b76p7\" (UID: \"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328956 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.328980 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/517bde6b-579b-4047-a627-315b3722d147-serving-cert\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329019 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-audit-dir\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329042 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329074 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92d745b7-0280-480b-b052-c2fd5499c43e-serving-cert\") pod \"route-controller-manager-6576b87f9c-k8vxr\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329099 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329124 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: 
\"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329150 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbnms\" (UniqueName: \"kubernetes.io/projected/84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc-kube-api-access-gbnms\") pod \"machine-approver-56656f9798-b76p7\" (UID: \"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329180 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztzzj\" (UniqueName: \"kubernetes.io/projected/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-kube-api-access-ztzzj\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329203 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-audit-policies\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329229 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329254 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n9s5\" (UniqueName: 
\"kubernetes.io/projected/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-kube-api-access-5n9s5\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329279 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329307 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329358 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3536e99a-ec06-422f-9944-20d3e4eca295-serving-cert\") pod \"console-operator-58897d9998-bsd9j\" (UID: \"3536e99a-ec06-422f-9944-20d3e4eca295\") " pod="openshift-console-operator/console-operator-58897d9998-bsd9j" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329385 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/517bde6b-579b-4047-a627-315b3722d147-etcd-client\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329407 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/517bde6b-579b-4047-a627-315b3722d147-audit-dir\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329430 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-audit-policies\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329945 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.329983 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.330078 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-audit-policies\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.330220 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a0c45070-058d-4223-a78e-11b1319eff38-serviceca\") pod \"image-pruner-29520000-k87pz\" (UID: \"a0c45070-058d-4223-a78e-11b1319eff38\") " pod="openshift-image-registry/image-pruner-29520000-k87pz" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.330865 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-etcd-serving-ca\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.330950 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/517bde6b-579b-4047-a627-315b3722d147-audit\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.331590 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-encryption-config\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.331590 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/517bde6b-579b-4047-a627-315b3722d147-encryption-config\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.332086 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dc98z"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.332374 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a754e420-4dd0-4ab2-b492-f088b31c3dca-config\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.332471 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.332656 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.332673 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-console-config\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.333144 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xks74"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.333476 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-audit-policies\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.333591 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-audit-dir\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.333630 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.333695 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-audit-dir\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.334115 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.334216 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc-config\") pod \"machine-approver-56656f9798-b76p7\" (UID: \"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.334669 4698 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.334850 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5b48014-cea0-4a23-80a0-0022370c5e7c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bhfmf\" (UID: \"f5b48014-cea0-4a23-80a0-0022370c5e7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.335047 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.335539 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6a60c76-1a30-4b5c-a984-08eef4aedb2b-config\") pod \"machine-api-operator-5694c8668f-v8kwd\" (UID: \"f6a60c76-1a30-4b5c-a984-08eef4aedb2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.335716 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f3cf6a0-528b-4e38-9e30-f274f3caa4a4-config\") pod \"kube-controller-manager-operator-78b949d7b-4j499\" (UID: \"3f3cf6a0-528b-4e38-9e30-f274f3caa4a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.335856 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.336115 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92d745b7-0280-480b-b052-c2fd5499c43e-client-ca\") pod \"route-controller-manager-6576b87f9c-k8vxr\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.336324 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.336415 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f6a60c76-1a30-4b5c-a984-08eef4aedb2b-images\") pod \"machine-api-operator-5694c8668f-v8kwd\" (UID: \"f6a60c76-1a30-4b5c-a984-08eef4aedb2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.336494 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-g6jpt"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.336719 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3536e99a-ec06-422f-9944-20d3e4eca295-serving-cert\") pod 
\"console-operator-58897d9998-bsd9j\" (UID: \"3536e99a-ec06-422f-9944-20d3e4eca295\") " pod="openshift-console-operator/console-operator-58897d9998-bsd9j" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.337453 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a754e420-4dd0-4ab2-b492-f088b31c3dca-service-ca-bundle\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.337510 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.337903 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-client-ca\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.337985 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.338000 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: 
\"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.338193 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a754e420-4dd0-4ab2-b492-f088b31c3dca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.338301 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/730d66a3-4e9f-4c36-9ce0-0c7e981b36ae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bg97c\" (UID: \"730d66a3-4e9f-4c36-9ce0-0c7e981b36ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.338528 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/517bde6b-579b-4047-a627-315b3722d147-audit-dir\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.338604 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc-auth-proxy-config\") pod \"machine-approver-56656f9798-b76p7\" (UID: \"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.339532 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl"] Feb 16 00:08:51 
crc kubenswrapper[4698]: I0216 00:08:51.341023 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.341307 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f3cf6a0-528b-4e38-9e30-f274f3caa4a4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4j499\" (UID: \"3f3cf6a0-528b-4e38-9e30-f274f3caa4a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.341361 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rcxmx"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.341760 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/517bde6b-579b-4047-a627-315b3722d147-etcd-client\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.341761 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96145a82-f664-45ba-805c-3721f813c8a9-serving-cert\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.341866 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3536e99a-ec06-422f-9944-20d3e4eca295-trusted-ca\") pod \"console-operator-58897d9998-bsd9j\" (UID: \"3536e99a-ec06-422f-9944-20d3e4eca295\") " pod="openshift-console-operator/console-operator-58897d9998-bsd9j" Feb 16 00:08:51 
crc kubenswrapper[4698]: I0216 00:08:51.342132 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-console-oauth-config\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.342296 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-console-serving-cert\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.342357 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-etcd-client\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.342320 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-oauth-serving-cert\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.342462 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc 
kubenswrapper[4698]: I0216 00:08:51.342540 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc-machine-approver-tls\") pod \"machine-approver-56656f9798-b76p7\" (UID: \"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.342461 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-serving-cert\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.342918 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a754e420-4dd0-4ab2-b492-f088b31c3dca-serving-cert\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.342928 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.343074 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rcxmx"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.343202 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.343527 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f6a60c76-1a30-4b5c-a984-08eef4aedb2b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-v8kwd\" (UID: \"f6a60c76-1a30-4b5c-a984-08eef4aedb2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.343946 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-l55t7"] Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.344254 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92d745b7-0280-480b-b052-c2fd5499c43e-serving-cert\") pod \"route-controller-manager-6576b87f9c-k8vxr\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.344482 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/730d66a3-4e9f-4c36-9ce0-0c7e981b36ae-serving-cert\") pod \"openshift-config-operator-7777fb866f-bg97c\" (UID: \"730d66a3-4e9f-4c36-9ce0-0c7e981b36ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.344734 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-l55t7" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.345372 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.346215 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/517bde6b-579b-4047-a627-315b3722d147-serving-cert\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.346663 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d745b7-0280-480b-b052-c2fd5499c43e-config\") pod \"route-controller-manager-6576b87f9c-k8vxr\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.351903 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.357776 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 00:08:51 crc 
kubenswrapper[4698]: I0216 00:08:51.379711 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.397334 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.417664 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.439236 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.458045 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.479128 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.498266 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.517875 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.538113 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.557426 4698 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.578421 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.617580 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.638434 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.658478 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.678781 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.698546 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.719084 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.737973 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.758604 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.778488 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" 
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.798224 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.817524 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.838558 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.859549 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.878644 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.898665 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.918396 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.938894 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.958448 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.978992 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 16 00:08:51 crc kubenswrapper[4698]: I0216 00:08:51.998601 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.461931 4698 request.go:700] Waited for 1.860863903s due to client-side throttling, not priority and fairness, request: PATCH:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-76f77b778f-kkhhq/status
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.477508 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.489046 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.489117 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.489316 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.489913 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.490048 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.490161 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.490254 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.490359 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.490473 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.490582 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.490724 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.490847 4698 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.490973 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.491240 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.491373 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.491850 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.490211 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.492256 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.492444 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.492636 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.492922 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.494791 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.495045 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.496594 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.497598 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.501353 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.501981 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.502412 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.503766 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.504503 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.505001 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.505332 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.505875 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.506404 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.511158 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.511300 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.521922 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.524886 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.529716 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.530757 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.532750 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5psf7\" (UniqueName: \"kubernetes.io/projected/92d745b7-0280-480b-b052-c2fd5499c43e-kube-api-access-5psf7\") pod \"route-controller-manager-6576b87f9c-k8vxr\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.532857 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w28pg\" (UniqueName: \"kubernetes.io/projected/730d66a3-4e9f-4c36-9ce0-0c7e981b36ae-kube-api-access-w28pg\") pod \"openshift-config-operator-7777fb866f-bg97c\" (UID: \"730d66a3-4e9f-4c36-9ce0-0c7e981b36ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.533126 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.533608 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.534128 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.534175 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.534292 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.534770 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.534990 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.540060 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxng9\" (UniqueName: \"kubernetes.io/projected/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-kube-api-access-pxng9\") pod \"oauth-openshift-558db77b4-ckzvr\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.541332 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8w5f\" (UniqueName: \"kubernetes.io/projected/517bde6b-579b-4047-a627-315b3722d147-kube-api-access-h8w5f\") pod \"apiserver-76f77b778f-kkhhq\" (UID: \"517bde6b-579b-4047-a627-315b3722d147\") " pod="openshift-apiserver/apiserver-76f77b778f-kkhhq"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.541643 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.542743 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n9s5\" (UniqueName: \"kubernetes.io/projected/b28deeab-0e2e-4d78-a65e-fbd423e08cf2-kube-api-access-5n9s5\") pod \"apiserver-7bbb656c7d-rzgh6\" (UID: \"b28deeab-0e2e-4d78-a65e-fbd423e08cf2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.546474 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.547422 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.547851 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.553304 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l57wn\" (UniqueName: \"kubernetes.io/projected/a0c45070-058d-4223-a78e-11b1319eff38-kube-api-access-l57wn\") pod \"image-pruner-29520000-k87pz\" (UID: \"a0c45070-058d-4223-a78e-11b1319eff38\") " pod="openshift-image-registry/image-pruner-29520000-k87pz"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.555239 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4qlb\" (UniqueName: \"kubernetes.io/projected/a754e420-4dd0-4ab2-b492-f088b31c3dca-kube-api-access-t4qlb\") pod \"authentication-operator-69f744f599-9vkwc\" (UID: \"a754e420-4dd0-4ab2-b492-f088b31c3dca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.555558 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.555807 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbnms\" (UniqueName: \"kubernetes.io/projected/84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc-kube-api-access-gbnms\") pod \"machine-approver-56656f9798-b76p7\" (UID: \"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.556082 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.556378 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.556562 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.564687 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhdrn\" (UniqueName: \"kubernetes.io/projected/3536e99a-ec06-422f-9944-20d3e4eca295-kube-api-access-xhdrn\") pod \"console-operator-58897d9998-bsd9j\" (UID: \"3536e99a-ec06-422f-9944-20d3e4eca295\") " pod="openshift-console-operator/console-operator-58897d9998-bsd9j"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.565533 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f3cf6a0-528b-4e38-9e30-f274f3caa4a4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4j499\" (UID: \"3f3cf6a0-528b-4e38-9e30-f274f3caa4a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.567222 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.570391 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgm2s\" (UniqueName: \"kubernetes.io/projected/f6a60c76-1a30-4b5c-a984-08eef4aedb2b-kube-api-access-bgm2s\") pod \"machine-api-operator-5694c8668f-v8kwd\" (UID: \"f6a60c76-1a30-4b5c-a984-08eef4aedb2b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.571561 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54sgt\" (UniqueName: \"kubernetes.io/projected/96145a82-f664-45ba-805c-3721f813c8a9-kube-api-access-54sgt\") pod \"controller-manager-879f6c89f-rn7cx\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.566245 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztzzj\" (UniqueName: \"kubernetes.io/projected/4678f0b3-74d6-4ea2-9294-6c9bc5e9de25-kube-api-access-ztzzj\") pod \"console-f9d7485db-x6shs\" (UID: \"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25\") " pod="openshift-console/console-f9d7485db-x6shs"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.574579 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8btd\" (UniqueName: \"kubernetes.io/projected/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-kube-api-access-k8btd\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.574709 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct84l\" (UniqueName: \"kubernetes.io/projected/f5b48014-cea0-4a23-80a0-0022370c5e7c-kube-api-access-ct84l\") pod \"cluster-samples-operator-665b6dd947-bhfmf\" (UID: \"f5b48014-cea0-4a23-80a0-0022370c5e7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.574759 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1-metrics-tls\") pod \"ingress-operator-5b745b69d9-nvkhl\" (UID: \"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.574821 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-serving-cert\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.574924 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjz67\" (UniqueName: \"kubernetes.io/projected/2162c872-a146-4e1c-8173-103f521103f0-kube-api-access-gjz67\") pod \"openshift-apiserver-operator-796bbdcf4f-jbxnw\" (UID: \"2162c872-a146-4e1c-8173-103f521103f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.574990 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3361de5c-c22b-46a4-b354-13c3e16a3d78-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t8z8b\" (UID: \"3361de5c-c22b-46a4-b354-13c3e16a3d78\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.575199 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396cf171-d75d-4c04-9211-098223756513-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g9bpw\" (UID: \"396cf171-d75d-4c04-9211-098223756513\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.575309 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7af2ddd8-b028-4713-8550-6cac706db73f-srv-cert\") pod \"catalog-operator-68c6474976-qp9bm\" (UID: \"7af2ddd8-b028-4713-8550-6cac706db73f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.575403 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2xcl\" (UniqueName: \"kubernetes.io/projected/981e423b-1add-418e-bd9b-01d92d9a66cf-kube-api-access-h2xcl\") pod \"service-ca-operator-777779d784-cbn44\" (UID: \"981e423b-1add-418e-bd9b-01d92d9a66cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cbn44"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.575488 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjctd\" (UniqueName: \"kubernetes.io/projected/ec24b0ba-9563-4228-af90-7774e49f5505-kube-api-access-wjctd\") pod \"downloads-7954f5f757-4qvcf\" (UID: \"ec24b0ba-9563-4228-af90-7774e49f5505\") " pod="openshift-console/downloads-7954f5f757-4qvcf"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.575565 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f90bba31-5d41-4e72-86b5-2b208c15fd27-proxy-tls\") pod \"machine-config-operator-74547568cd-d4pz6\" (UID: \"f90bba31-5d41-4e72-86b5-2b208c15fd27\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.575674 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-config-volume\") pod \"collect-profiles-29520000-bqmjt\" (UID: \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.575738 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8svb\" (UniqueName: \"kubernetes.io/projected/2abff28e-c9b9-41b3-be02-5876c5e4b91d-kube-api-access-k8svb\") pod \"cluster-image-registry-operator-dc59b4c8b-99zqn\" (UID: \"2abff28e-c9b9-41b3-be02-5876c5e4b91d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.576777 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.578196 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nvkhl\" (UID: \"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.578280 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/36efba2c-72ec-4f18-b467-4dc7f2245de9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5sr9r\" (UID: \"36efba2c-72ec-4f18-b467-4dc7f2245de9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.578671 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63fb50bd-c7be-4229-80c6-26017b6bac3b-metrics-tls\") pod \"dns-operator-744455d44c-db2vn\" (UID: \"63fb50bd-c7be-4229-80c6-26017b6bac3b\") " pod="openshift-dns-operator/dns-operator-744455d44c-db2vn"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.578772 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq895\" (UniqueName: \"kubernetes.io/projected/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-kube-api-access-nq895\") pod \"collect-profiles-29520000-bqmjt\" (UID: \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.578950 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ca33a8b-6e95-4a0f-9892-53b18a92b078-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zdlqd\" (UID: \"3ca33a8b-6e95-4a0f-9892-53b18a92b078\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.578989 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv8qg\" (UniqueName: \"kubernetes.io/projected/260201f8-5e3b-4ba4-a945-13fe10a8ad3a-kube-api-access-gv8qg\") pod \"migrator-59844c95c7-fh5qr\" (UID: \"260201f8-5e3b-4ba4-a945-13fe10a8ad3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fh5qr"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.582237 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.579079 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfnfc\" (UniqueName: \"kubernetes.io/projected/965a4f3a-6e82-46eb-964a-4c7f40fdaf0e-kube-api-access-zfnfc\") pod \"kube-storage-version-migrator-operator-b67b599dd-t8g5k\" (UID: \"965a4f3a-6e82-46eb-964a-4c7f40fdaf0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.582999 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.583050 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/981e423b-1add-418e-bd9b-01d92d9a66cf-config\") pod \"service-ca-operator-777779d784-cbn44\" (UID: \"981e423b-1add-418e-bd9b-01d92d9a66cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cbn44"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.583086 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbcr5\" (UniqueName: \"kubernetes.io/projected/b700b649-4899-457c-afe5-575cf4a8907e-kube-api-access-nbcr5\") pod \"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " pod="openshift-ingress/router-default-5444994796-2dhzm"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.583153 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e628ec8-31d7-43de-9c56-58f049dd8935-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.583189 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-registry-tls\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.583210 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-etcd-ca\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.583237 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2abff28e-c9b9-41b3-be02-5876c5e4b91d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-99zqn\" (UID: \"2abff28e-c9b9-41b3-be02-5876c5e4b91d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.583495 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnncf\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-kube-api-access-qnncf\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.583529 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm8cq\" (UniqueName: \"kubernetes.io/projected/396cf171-d75d-4c04-9211-098223756513-kube-api-access-vm8cq\") pod \"openshift-controller-manager-operator-756b6f6bc6-g9bpw\" (UID: \"396cf171-d75d-4c04-9211-098223756513\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.583632 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-secret-volume\") pod \"collect-profiles-29520000-bqmjt\" (UID: \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.583924 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b700b649-4899-457c-afe5-575cf4a8907e-stats-auth\") pod \"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " pod="openshift-ingress/router-default-5444994796-2dhzm"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.584024 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ca33a8b-6e95-4a0f-9892-53b18a92b078-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zdlqd\" (UID: \"3ca33a8b-6e95-4a0f-9892-53b18a92b078\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.584228 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e628ec8-31d7-43de-9c56-58f049dd8935-registry-certificates\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.584600 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-etcd-service-ca\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.584682 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b700b649-4899-457c-afe5-575cf4a8907e-default-certificate\") pod \"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " pod="openshift-ingress/router-default-5444994796-2dhzm"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.585327 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f90bba31-5d41-4e72-86b5-2b208c15fd27-auth-proxy-config\") pod \"machine-config-operator-74547568cd-d4pz6\" (UID: \"f90bba31-5d41-4e72-86b5-2b208c15fd27\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.585400 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3361de5c-c22b-46a4-b354-13c3e16a3d78-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t8z8b\" (UID: \"3361de5c-c22b-46a4-b354-13c3e16a3d78\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.585427 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2abff28e-c9b9-41b3-be02-5876c5e4b91d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-99zqn\" (UID: \"2abff28e-c9b9-41b3-be02-5876c5e4b91d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.585455 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1791eeb4-5348-4fe9-9cea-0eba0d00c869-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xt8xd\" (UID: \"1791eeb4-5348-4fe9-9cea-0eba0d00c869\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xt8xd"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.585503 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5smm\" (UniqueName: \"kubernetes.io/projected/36efba2c-72ec-4f18-b467-4dc7f2245de9-kube-api-access-s5smm\") pod \"olm-operator-6b444d44fb-5sr9r\" (UID: \"36efba2c-72ec-4f18-b467-4dc7f2245de9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.585532 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/965a4f3a-6e82-46eb-964a-4c7f40fdaf0e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t8g5k\" (UID: \"965a4f3a-6e82-46eb-964a-4c7f40fdaf0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.585572 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3361de5c-c22b-46a4-b354-13c3e16a3d78-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t8z8b\" (UID: \"3361de5c-c22b-46a4-b354-13c3e16a3d78\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.585595 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/36efba2c-72ec-4f18-b467-4dc7f2245de9-srv-cert\") pod \"olm-operator-6b444d44fb-5sr9r\" (UID: \"36efba2c-72ec-4f18-b467-4dc7f2245de9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.585675 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/3ca33a8b-6e95-4a0f-9892-53b18a92b078-config\") pod \"kube-apiserver-operator-766d6c64bb-zdlqd\" (UID: \"3ca33a8b-6e95-4a0f-9892-53b18a92b078\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.585701 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgww\" (UniqueName: \"kubernetes.io/projected/b89dcc24-5331-4c05-9a27-5f4415a7faf1-kube-api-access-rqgww\") pod \"control-plane-machine-set-operator-78cbb6b69f-dmvbr\" (UID: \"b89dcc24-5331-4c05-9a27-5f4415a7faf1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dmvbr" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.585742 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlbvc\" (UniqueName: \"kubernetes.io/projected/d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1-kube-api-access-jlbvc\") pod \"ingress-operator-5b745b69d9-nvkhl\" (UID: \"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.585809 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg8r6\" (UniqueName: \"kubernetes.io/projected/63fb50bd-c7be-4229-80c6-26017b6bac3b-kube-api-access-jg8r6\") pod \"dns-operator-744455d44c-db2vn\" (UID: \"63fb50bd-c7be-4229-80c6-26017b6bac3b\") " pod="openshift-dns-operator/dns-operator-744455d44c-db2vn" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.585830 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b700b649-4899-457c-afe5-575cf4a8907e-metrics-certs\") pod \"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " 
pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.585869 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-etcd-client\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586002 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e628ec8-31d7-43de-9c56-58f049dd8935-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586029 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/396cf171-d75d-4c04-9211-098223756513-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g9bpw\" (UID: \"396cf171-d75d-4c04-9211-098223756513\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586100 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t5l5\" (UniqueName: \"kubernetes.io/projected/1791eeb4-5348-4fe9-9cea-0eba0d00c869-kube-api-access-6t5l5\") pod \"multus-admission-controller-857f4d67dd-xt8xd\" (UID: \"1791eeb4-5348-4fe9-9cea-0eba0d00c869\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xt8xd" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586144 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-bound-sa-token\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586183 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1-trusted-ca\") pod \"ingress-operator-5b745b69d9-nvkhl\" (UID: \"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586214 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b89dcc24-5331-4c05-9a27-5f4415a7faf1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dmvbr\" (UID: \"b89dcc24-5331-4c05-9a27-5f4415a7faf1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dmvbr" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586254 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2162c872-a146-4e1c-8173-103f521103f0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jbxnw\" (UID: \"2162c872-a146-4e1c-8173-103f521103f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586284 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-config\") pod 
\"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586304 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhhng\" (UniqueName: \"kubernetes.io/projected/f90bba31-5d41-4e72-86b5-2b208c15fd27-kube-api-access-vhhng\") pod \"machine-config-operator-74547568cd-d4pz6\" (UID: \"f90bba31-5d41-4e72-86b5-2b208c15fd27\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586339 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2abff28e-c9b9-41b3-be02-5876c5e4b91d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-99zqn\" (UID: \"2abff28e-c9b9-41b3-be02-5876c5e4b91d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586377 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f90bba31-5d41-4e72-86b5-2b208c15fd27-images\") pod \"machine-config-operator-74547568cd-d4pz6\" (UID: \"f90bba31-5d41-4e72-86b5-2b208c15fd27\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586394 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2162c872-a146-4e1c-8173-103f521103f0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jbxnw\" (UID: \"2162c872-a146-4e1c-8173-103f521103f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 
00:08:53.586414 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7af2ddd8-b028-4713-8550-6cac706db73f-profile-collector-cert\") pod \"catalog-operator-68c6474976-qp9bm\" (UID: \"7af2ddd8-b028-4713-8550-6cac706db73f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586637 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/981e423b-1add-418e-bd9b-01d92d9a66cf-serving-cert\") pod \"service-ca-operator-777779d784-cbn44\" (UID: \"981e423b-1add-418e-bd9b-01d92d9a66cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cbn44" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586673 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9kll\" (UniqueName: \"kubernetes.io/projected/7af2ddd8-b028-4713-8550-6cac706db73f-kube-api-access-c9kll\") pod \"catalog-operator-68c6474976-qp9bm\" (UID: \"7af2ddd8-b028-4713-8550-6cac706db73f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586722 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e628ec8-31d7-43de-9c56-58f049dd8935-trusted-ca\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586745 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b700b649-4899-457c-afe5-575cf4a8907e-service-ca-bundle\") pod \"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.586812 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/965a4f3a-6e82-46eb-964a-4c7f40fdaf0e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-t8g5k\" (UID: \"965a4f3a-6e82-46eb-964a-4c7f40fdaf0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k" Feb 16 00:08:53 crc kubenswrapper[4698]: E0216 00:08:53.592702 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:54.09259739 +0000 UTC m=+143.750496352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.616879 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.619941 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.635280 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.645761 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bsd9j" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.656717 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.666565 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29520000-k87pz" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.671535 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.681497 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726157 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726441 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-socket-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726474 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e628ec8-31d7-43de-9c56-58f049dd8935-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726511 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/396cf171-d75d-4c04-9211-098223756513-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g9bpw\" (UID: \"396cf171-d75d-4c04-9211-098223756513\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726531 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t5l5\" (UniqueName: 
\"kubernetes.io/projected/1791eeb4-5348-4fe9-9cea-0eba0d00c869-kube-api-access-6t5l5\") pod \"multus-admission-controller-857f4d67dd-xt8xd\" (UID: \"1791eeb4-5348-4fe9-9cea-0eba0d00c869\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xt8xd" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726553 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-bound-sa-token\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726574 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1-trusted-ca\") pod \"ingress-operator-5b745b69d9-nvkhl\" (UID: \"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726595 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76dae780-f238-4e3f-9e31-38a47f1e1991-proxy-tls\") pod \"machine-config-controller-84d6567774-xks74\" (UID: \"76dae780-f238-4e3f-9e31-38a47f1e1991\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726645 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9679204c-d5b6-489d-9f27-d84d360284ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wdfct\" (UID: \"9679204c-d5b6-489d-9f27-d84d360284ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" Feb 16 00:08:53 crc 
kubenswrapper[4698]: I0216 00:08:53.726677 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b89dcc24-5331-4c05-9a27-5f4415a7faf1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dmvbr\" (UID: \"b89dcc24-5331-4c05-9a27-5f4415a7faf1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dmvbr" Feb 16 00:08:53 crc kubenswrapper[4698]: E0216 00:08:53.726718 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:54.226684673 +0000 UTC m=+143.884583475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726771 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-csi-data-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726818 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-config\") pod 
\"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726853 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2162c872-a146-4e1c-8173-103f521103f0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jbxnw\" (UID: \"2162c872-a146-4e1c-8173-103f521103f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726877 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-plugins-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726904 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhhng\" (UniqueName: \"kubernetes.io/projected/f90bba31-5d41-4e72-86b5-2b208c15fd27-kube-api-access-vhhng\") pod \"machine-config-operator-74547568cd-d4pz6\" (UID: \"f90bba31-5d41-4e72-86b5-2b208c15fd27\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726929 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-registration-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.726967 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/2162c872-a146-4e1c-8173-103f521103f0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jbxnw\" (UID: \"2162c872-a146-4e1c-8173-103f521103f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727010 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2abff28e-c9b9-41b3-be02-5876c5e4b91d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-99zqn\" (UID: \"2abff28e-c9b9-41b3-be02-5876c5e4b91d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727037 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f90bba31-5d41-4e72-86b5-2b208c15fd27-images\") pod \"machine-config-operator-74547568cd-d4pz6\" (UID: \"f90bba31-5d41-4e72-86b5-2b208c15fd27\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727061 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b92s\" (UniqueName: \"kubernetes.io/projected/13e3631d-31f8-4769-b931-22e47b7f9e14-kube-api-access-2b92s\") pod \"packageserver-d55dfcdfc-8hk54\" (UID: \"13e3631d-31f8-4769-b931-22e47b7f9e14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727088 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7af2ddd8-b028-4713-8550-6cac706db73f-profile-collector-cert\") pod \"catalog-operator-68c6474976-qp9bm\" (UID: \"7af2ddd8-b028-4713-8550-6cac706db73f\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727113 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/981e423b-1add-418e-bd9b-01d92d9a66cf-serving-cert\") pod \"service-ca-operator-777779d784-cbn44\" (UID: \"981e423b-1add-418e-bd9b-01d92d9a66cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cbn44" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727145 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9kll\" (UniqueName: \"kubernetes.io/projected/7af2ddd8-b028-4713-8550-6cac706db73f-kube-api-access-c9kll\") pod \"catalog-operator-68c6474976-qp9bm\" (UID: \"7af2ddd8-b028-4713-8550-6cac706db73f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727194 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e628ec8-31d7-43de-9c56-58f049dd8935-trusted-ca\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727222 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b700b649-4899-457c-afe5-575cf4a8907e-service-ca-bundle\") pod \"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727259 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/965a4f3a-6e82-46eb-964a-4c7f40fdaf0e-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-t8g5k\" (UID: \"965a4f3a-6e82-46eb-964a-4c7f40fdaf0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727288 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8btd\" (UniqueName: \"kubernetes.io/projected/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-kube-api-access-k8btd\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727312 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjz67\" (UniqueName: \"kubernetes.io/projected/2162c872-a146-4e1c-8173-103f521103f0-kube-api-access-gjz67\") pod \"openshift-apiserver-operator-796bbdcf4f-jbxnw\" (UID: \"2162c872-a146-4e1c-8173-103f521103f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727352 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7292f832-f19e-49fb-9d13-7af3c57d876c-signing-key\") pod \"service-ca-9c57cc56f-g6jpt\" (UID: \"7292f832-f19e-49fb-9d13-7af3c57d876c\") " pod="openshift-service-ca/service-ca-9c57cc56f-g6jpt" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727394 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3361de5c-c22b-46a4-b354-13c3e16a3d78-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t8z8b\" (UID: \"3361de5c-c22b-46a4-b354-13c3e16a3d78\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 
00:08:53.727417 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1-metrics-tls\") pod \"ingress-operator-5b745b69d9-nvkhl\" (UID: \"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727441 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-serving-cert\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727486 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7292f832-f19e-49fb-9d13-7af3c57d876c-signing-cabundle\") pod \"service-ca-9c57cc56f-g6jpt\" (UID: \"7292f832-f19e-49fb-9d13-7af3c57d876c\") " pod="openshift-service-ca/service-ca-9c57cc56f-g6jpt" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727519 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396cf171-d75d-4c04-9211-098223756513-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g9bpw\" (UID: \"396cf171-d75d-4c04-9211-098223756513\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727544 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c55z7\" (UniqueName: \"kubernetes.io/projected/291f913a-6566-409f-8663-e2f695edf9a6-kube-api-access-c55z7\") pod \"csi-hostpathplugin-rcxmx\" (UID: 
\"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727569 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7af2ddd8-b028-4713-8550-6cac706db73f-srv-cert\") pod \"catalog-operator-68c6474976-qp9bm\" (UID: \"7af2ddd8-b028-4713-8550-6cac706db73f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727590 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgtb9\" (UniqueName: \"kubernetes.io/projected/b23a39ad-fdae-41b5-bc81-88ebee430bfb-kube-api-access-zgtb9\") pod \"ingress-canary-dc98z\" (UID: \"b23a39ad-fdae-41b5-bc81-88ebee430bfb\") " pod="openshift-ingress-canary/ingress-canary-dc98z" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727656 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2xcl\" (UniqueName: \"kubernetes.io/projected/981e423b-1add-418e-bd9b-01d92d9a66cf-kube-api-access-h2xcl\") pod \"service-ca-operator-777779d784-cbn44\" (UID: \"981e423b-1add-418e-bd9b-01d92d9a66cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cbn44" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727703 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjctd\" (UniqueName: \"kubernetes.io/projected/ec24b0ba-9563-4228-af90-7774e49f5505-kube-api-access-wjctd\") pod \"downloads-7954f5f757-4qvcf\" (UID: \"ec24b0ba-9563-4228-af90-7774e49f5505\") " pod="openshift-console/downloads-7954f5f757-4qvcf" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727733 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/f90bba31-5d41-4e72-86b5-2b208c15fd27-proxy-tls\") pod \"machine-config-operator-74547568cd-d4pz6\" (UID: \"f90bba31-5d41-4e72-86b5-2b208c15fd27\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727762 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp48f\" (UniqueName: \"kubernetes.io/projected/051b67fb-7c0f-4bb5-b1e4-c0ce4708c665-kube-api-access-pp48f\") pod \"machine-config-server-l55t7\" (UID: \"051b67fb-7c0f-4bb5-b1e4-c0ce4708c665\") " pod="openshift-machine-config-operator/machine-config-server-l55t7" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727795 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-config-volume\") pod \"collect-profiles-29520000-bqmjt\" (UID: \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727824 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9679204c-d5b6-489d-9f27-d84d360284ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wdfct\" (UID: \"9679204c-d5b6-489d-9f27-d84d360284ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727860 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/13e3631d-31f8-4769-b931-22e47b7f9e14-tmpfs\") pod \"packageserver-d55dfcdfc-8hk54\" (UID: \"13e3631d-31f8-4769-b931-22e47b7f9e14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54" 
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727892 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13e3631d-31f8-4769-b931-22e47b7f9e14-webhook-cert\") pod \"packageserver-d55dfcdfc-8hk54\" (UID: \"13e3631d-31f8-4769-b931-22e47b7f9e14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727921 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8svb\" (UniqueName: \"kubernetes.io/projected/2abff28e-c9b9-41b3-be02-5876c5e4b91d-kube-api-access-k8svb\") pod \"cluster-image-registry-operator-dc59b4c8b-99zqn\" (UID: \"2abff28e-c9b9-41b3-be02-5876c5e4b91d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.727963 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbkvz\" (UniqueName: \"kubernetes.io/projected/78a2ec7b-781f-4ec4-8920-377f90b037fb-kube-api-access-lbkvz\") pod \"package-server-manager-789f6589d5-cmfwl\" (UID: \"78a2ec7b-781f-4ec4-8920-377f90b037fb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728002 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/36efba2c-72ec-4f18-b467-4dc7f2245de9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5sr9r\" (UID: \"36efba2c-72ec-4f18-b467-4dc7f2245de9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728035 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nvkhl\" (UID: \"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728061 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ca33a8b-6e95-4a0f-9892-53b18a92b078-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zdlqd\" (UID: \"3ca33a8b-6e95-4a0f-9892-53b18a92b078\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728099 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv8qg\" (UniqueName: \"kubernetes.io/projected/260201f8-5e3b-4ba4-a945-13fe10a8ad3a-kube-api-access-gv8qg\") pod \"migrator-59844c95c7-fh5qr\" (UID: \"260201f8-5e3b-4ba4-a945-13fe10a8ad3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fh5qr" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728127 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfnfc\" (UniqueName: \"kubernetes.io/projected/965a4f3a-6e82-46eb-964a-4c7f40fdaf0e-kube-api-access-zfnfc\") pod \"kube-storage-version-migrator-operator-b67b599dd-t8g5k\" (UID: \"965a4f3a-6e82-46eb-964a-4c7f40fdaf0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728159 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728185 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63fb50bd-c7be-4229-80c6-26017b6bac3b-metrics-tls\") pod \"dns-operator-744455d44c-db2vn\" (UID: \"63fb50bd-c7be-4229-80c6-26017b6bac3b\") " pod="openshift-dns-operator/dns-operator-744455d44c-db2vn" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728218 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq895\" (UniqueName: \"kubernetes.io/projected/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-kube-api-access-nq895\") pod \"collect-profiles-29520000-bqmjt\" (UID: \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728241 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/981e423b-1add-418e-bd9b-01d92d9a66cf-config\") pod \"service-ca-operator-777779d784-cbn44\" (UID: \"981e423b-1add-418e-bd9b-01d92d9a66cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cbn44" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728267 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/051b67fb-7c0f-4bb5-b1e4-c0ce4708c665-certs\") pod \"machine-config-server-l55t7\" (UID: \"051b67fb-7c0f-4bb5-b1e4-c0ce4708c665\") " pod="openshift-machine-config-operator/machine-config-server-l55t7" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728294 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbcr5\" (UniqueName: \"kubernetes.io/projected/b700b649-4899-457c-afe5-575cf4a8907e-kube-api-access-nbcr5\") pod 
\"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728333 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76dae780-f238-4e3f-9e31-38a47f1e1991-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xks74\" (UID: \"76dae780-f238-4e3f-9e31-38a47f1e1991\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728360 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-mountpoint-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728395 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e628ec8-31d7-43de-9c56-58f049dd8935-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728420 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9szx\" (UniqueName: \"kubernetes.io/projected/7292f832-f19e-49fb-9d13-7af3c57d876c-kube-api-access-d9szx\") pod \"service-ca-9c57cc56f-g6jpt\" (UID: \"7292f832-f19e-49fb-9d13-7af3c57d876c\") " pod="openshift-service-ca/service-ca-9c57cc56f-g6jpt" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728447 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-registry-tls\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728471 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-etcd-ca\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728502 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/78a2ec7b-781f-4ec4-8920-377f90b037fb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cmfwl\" (UID: \"78a2ec7b-781f-4ec4-8920-377f90b037fb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728535 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2abff28e-c9b9-41b3-be02-5876c5e4b91d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-99zqn\" (UID: \"2abff28e-c9b9-41b3-be02-5876c5e4b91d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728560 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm8cq\" (UniqueName: \"kubernetes.io/projected/396cf171-d75d-4c04-9211-098223756513-kube-api-access-vm8cq\") pod \"openshift-controller-manager-operator-756b6f6bc6-g9bpw\" (UID: 
\"396cf171-d75d-4c04-9211-098223756513\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728592 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnncf\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-kube-api-access-qnncf\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728643 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-secret-volume\") pod \"collect-profiles-29520000-bqmjt\" (UID: \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.728711 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b700b649-4899-457c-afe5-575cf4a8907e-stats-auth\") pod \"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.729424 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e628ec8-31d7-43de-9c56-58f049dd8935-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.729562 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/396cf171-d75d-4c04-9211-098223756513-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g9bpw\" (UID: \"396cf171-d75d-4c04-9211-098223756513\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.730307 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/051b67fb-7c0f-4bb5-b1e4-c0ce4708c665-node-bootstrap-token\") pod \"machine-config-server-l55t7\" (UID: \"051b67fb-7c0f-4bb5-b1e4-c0ce4708c665\") " pod="openshift-machine-config-operator/machine-config-server-l55t7" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.730351 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13e3631d-31f8-4769-b931-22e47b7f9e14-apiservice-cert\") pod \"packageserver-d55dfcdfc-8hk54\" (UID: \"13e3631d-31f8-4769-b931-22e47b7f9e14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.730389 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2162c872-a146-4e1c-8173-103f521103f0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jbxnw\" (UID: \"2162c872-a146-4e1c-8173-103f521103f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.731174 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-config-volume\") pod \"collect-profiles-29520000-bqmjt\" (UID: \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt" Feb 16 
00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.731239 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ca33a8b-6e95-4a0f-9892-53b18a92b078-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zdlqd\" (UID: \"3ca33a8b-6e95-4a0f-9892-53b18a92b078\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.731287 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e628ec8-31d7-43de-9c56-58f049dd8935-registry-certificates\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.731318 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-etcd-service-ca\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.732548 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-config\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.733223 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2abff28e-c9b9-41b3-be02-5876c5e4b91d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-99zqn\" (UID: \"2abff28e-c9b9-41b3-be02-5876c5e4b91d\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.733759 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f90bba31-5d41-4e72-86b5-2b208c15fd27-images\") pod \"machine-config-operator-74547568cd-d4pz6\" (UID: \"f90bba31-5d41-4e72-86b5-2b208c15fd27\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" Feb 16 00:08:53 crc kubenswrapper[4698]: E0216 00:08:53.737861 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:54.237833702 +0000 UTC m=+143.895732554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.738388 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-etcd-service-ca\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.738593 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/981e423b-1add-418e-bd9b-01d92d9a66cf-config\") pod \"service-ca-operator-777779d784-cbn44\" (UID: 
\"981e423b-1add-418e-bd9b-01d92d9a66cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cbn44" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.738821 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e628ec8-31d7-43de-9c56-58f049dd8935-registry-certificates\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.738933 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b700b649-4899-457c-afe5-575cf4a8907e-default-certificate\") pod \"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.739091 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/965a4f3a-6e82-46eb-964a-4c7f40fdaf0e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-t8g5k\" (UID: \"965a4f3a-6e82-46eb-964a-4c7f40fdaf0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.739747 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-etcd-ca\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.739886 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b700b649-4899-457c-afe5-575cf4a8907e-service-ca-bundle\") pod \"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.740023 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.740361 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e628ec8-31d7-43de-9c56-58f049dd8935-trusted-ca\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.740969 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1-trusted-ca\") pod \"ingress-operator-5b745b69d9-nvkhl\" (UID: \"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.742393 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f90bba31-5d41-4e72-86b5-2b208c15fd27-auth-proxy-config\") pod \"machine-config-operator-74547568cd-d4pz6\" (UID: \"f90bba31-5d41-4e72-86b5-2b208c15fd27\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.742429 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3361de5c-c22b-46a4-b354-13c3e16a3d78-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t8z8b\" (UID: 
\"3361de5c-c22b-46a4-b354-13c3e16a3d78\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.742488 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2abff28e-c9b9-41b3-be02-5876c5e4b91d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-99zqn\" (UID: \"2abff28e-c9b9-41b3-be02-5876c5e4b91d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.742509 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1791eeb4-5348-4fe9-9cea-0eba0d00c869-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xt8xd\" (UID: \"1791eeb4-5348-4fe9-9cea-0eba0d00c869\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xt8xd" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.743095 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3361de5c-c22b-46a4-b354-13c3e16a3d78-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t8z8b\" (UID: \"3361de5c-c22b-46a4-b354-13c3e16a3d78\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.743722 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f90bba31-5d41-4e72-86b5-2b208c15fd27-auth-proxy-config\") pod \"machine-config-operator-74547568cd-d4pz6\" (UID: \"f90bba31-5d41-4e72-86b5-2b208c15fd27\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.743803 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s5smm\" (UniqueName: \"kubernetes.io/projected/36efba2c-72ec-4f18-b467-4dc7f2245de9-kube-api-access-s5smm\") pod \"olm-operator-6b444d44fb-5sr9r\" (UID: \"36efba2c-72ec-4f18-b467-4dc7f2245de9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.743910 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/965a4f3a-6e82-46eb-964a-4c7f40fdaf0e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t8g5k\" (UID: \"965a4f3a-6e82-46eb-964a-4c7f40fdaf0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.743981 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3361de5c-c22b-46a4-b354-13c3e16a3d78-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t8z8b\" (UID: \"3361de5c-c22b-46a4-b354-13c3e16a3d78\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.744009 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/36efba2c-72ec-4f18-b467-4dc7f2245de9-srv-cert\") pod \"olm-operator-6b444d44fb-5sr9r\" (UID: \"36efba2c-72ec-4f18-b467-4dc7f2245de9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.744044 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0aa34695-49b9-4a9f-bec0-db46d80d3f64-metrics-tls\") pod \"dns-default-ngf8q\" (UID: \"0aa34695-49b9-4a9f-bec0-db46d80d3f64\") " 
pod="openshift-dns/dns-default-ngf8q" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.744108 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctvvr\" (UniqueName: \"kubernetes.io/projected/0aa34695-49b9-4a9f-bec0-db46d80d3f64-kube-api-access-ctvvr\") pod \"dns-default-ngf8q\" (UID: \"0aa34695-49b9-4a9f-bec0-db46d80d3f64\") " pod="openshift-dns/dns-default-ngf8q" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.744136 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0aa34695-49b9-4a9f-bec0-db46d80d3f64-config-volume\") pod \"dns-default-ngf8q\" (UID: \"0aa34695-49b9-4a9f-bec0-db46d80d3f64\") " pod="openshift-dns/dns-default-ngf8q" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.744168 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqgww\" (UniqueName: \"kubernetes.io/projected/b89dcc24-5331-4c05-9a27-5f4415a7faf1-kube-api-access-rqgww\") pod \"control-plane-machine-set-operator-78cbb6b69f-dmvbr\" (UID: \"b89dcc24-5331-4c05-9a27-5f4415a7faf1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dmvbr" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.749283 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b23a39ad-fdae-41b5-bc81-88ebee430bfb-cert\") pod \"ingress-canary-dc98z\" (UID: \"b23a39ad-fdae-41b5-bc81-88ebee430bfb\") " pod="openshift-ingress-canary/ingress-canary-dc98z" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.749427 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ca33a8b-6e95-4a0f-9892-53b18a92b078-config\") pod \"kube-apiserver-operator-766d6c64bb-zdlqd\" (UID: 
\"3ca33a8b-6e95-4a0f-9892-53b18a92b078\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.750239 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ca33a8b-6e95-4a0f-9892-53b18a92b078-config\") pod \"kube-apiserver-operator-766d6c64bb-zdlqd\" (UID: \"3ca33a8b-6e95-4a0f-9892-53b18a92b078\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.754021 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlbvc\" (UniqueName: \"kubernetes.io/projected/d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1-kube-api-access-jlbvc\") pod \"ingress-operator-5b745b69d9-nvkhl\" (UID: \"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.754219 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz4lh\" (UniqueName: \"kubernetes.io/projected/76dae780-f238-4e3f-9e31-38a47f1e1991-kube-api-access-rz4lh\") pod \"machine-config-controller-84d6567774-xks74\" (UID: \"76dae780-f238-4e3f-9e31-38a47f1e1991\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.754372 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg8r6\" (UniqueName: \"kubernetes.io/projected/63fb50bd-c7be-4229-80c6-26017b6bac3b-kube-api-access-jg8r6\") pod \"dns-operator-744455d44c-db2vn\" (UID: \"63fb50bd-c7be-4229-80c6-26017b6bac3b\") " pod="openshift-dns-operator/dns-operator-744455d44c-db2vn" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.754414 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b700b649-4899-457c-afe5-575cf4a8907e-metrics-certs\") pod \"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.754476 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz2w4\" (UniqueName: \"kubernetes.io/projected/9679204c-d5b6-489d-9f27-d84d360284ae-kube-api-access-qz2w4\") pod \"marketplace-operator-79b997595-wdfct\" (UID: \"9679204c-d5b6-489d-9f27-d84d360284ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.754550 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-etcd-client\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.762705 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.769008 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b700b649-4899-457c-afe5-575cf4a8907e-stats-auth\") pod \"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.769523 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2abff28e-c9b9-41b3-be02-5876c5e4b91d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-99zqn\" (UID: \"2abff28e-c9b9-41b3-be02-5876c5e4b91d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.770707 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/36efba2c-72ec-4f18-b467-4dc7f2245de9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5sr9r\" (UID: \"36efba2c-72ec-4f18-b467-4dc7f2245de9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.771174 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1791eeb4-5348-4fe9-9cea-0eba0d00c869-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xt8xd\" (UID: \"1791eeb4-5348-4fe9-9cea-0eba0d00c869\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xt8xd" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.771187 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3361de5c-c22b-46a4-b354-13c3e16a3d78-serving-cert\") 
pod \"openshift-kube-scheduler-operator-5fdd9b5758-t8z8b\" (UID: \"3361de5c-c22b-46a4-b354-13c3e16a3d78\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.771276 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/396cf171-d75d-4c04-9211-098223756513-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g9bpw\" (UID: \"396cf171-d75d-4c04-9211-098223756513\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.772320 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ca33a8b-6e95-4a0f-9892-53b18a92b078-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zdlqd\" (UID: \"3ca33a8b-6e95-4a0f-9892-53b18a92b078\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.772404 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1-metrics-tls\") pod \"ingress-operator-5b745b69d9-nvkhl\" (UID: \"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.772761 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e628ec8-31d7-43de-9c56-58f049dd8935-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.772828 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/965a4f3a-6e82-46eb-964a-4c7f40fdaf0e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t8g5k\" (UID: \"965a4f3a-6e82-46eb-964a-4c7f40fdaf0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.773080 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-bound-sa-token\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.773833 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63fb50bd-c7be-4229-80c6-26017b6bac3b-metrics-tls\") pod \"dns-operator-744455d44c-db2vn\" (UID: \"63fb50bd-c7be-4229-80c6-26017b6bac3b\") " pod="openshift-dns-operator/dns-operator-744455d44c-db2vn" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.774169 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b700b649-4899-457c-afe5-575cf4a8907e-default-certificate\") pod \"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.774519 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f90bba31-5d41-4e72-86b5-2b208c15fd27-proxy-tls\") pod \"machine-config-operator-74547568cd-d4pz6\" (UID: \"f90bba31-5d41-4e72-86b5-2b208c15fd27\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" Feb 16 00:08:53 crc 
kubenswrapper[4698]: I0216 00:08:53.774594 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/981e423b-1add-418e-bd9b-01d92d9a66cf-serving-cert\") pod \"service-ca-operator-777779d784-cbn44\" (UID: \"981e423b-1add-418e-bd9b-01d92d9a66cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cbn44" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.775758 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7af2ddd8-b028-4713-8550-6cac706db73f-profile-collector-cert\") pod \"catalog-operator-68c6474976-qp9bm\" (UID: \"7af2ddd8-b028-4713-8550-6cac706db73f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.776138 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-registry-tls\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.776597 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-etcd-client\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.777027 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-secret-volume\") pod \"collect-profiles-29520000-bqmjt\" (UID: \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.777248 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b700b649-4899-457c-afe5-575cf4a8907e-metrics-certs\") pod \"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.777640 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7af2ddd8-b028-4713-8550-6cac706db73f-srv-cert\") pod \"catalog-operator-68c6474976-qp9bm\" (UID: \"7af2ddd8-b028-4713-8550-6cac706db73f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.778724 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2162c872-a146-4e1c-8173-103f521103f0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jbxnw\" (UID: \"2162c872-a146-4e1c-8173-103f521103f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.778793 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b89dcc24-5331-4c05-9a27-5f4415a7faf1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dmvbr\" (UID: \"b89dcc24-5331-4c05-9a27-5f4415a7faf1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dmvbr" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.780075 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-serving-cert\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.780460 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/36efba2c-72ec-4f18-b467-4dc7f2245de9-srv-cert\") pod \"olm-operator-6b444d44fb-5sr9r\" (UID: \"36efba2c-72ec-4f18-b467-4dc7f2245de9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.783061 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2xcl\" (UniqueName: \"kubernetes.io/projected/981e423b-1add-418e-bd9b-01d92d9a66cf-kube-api-access-h2xcl\") pod \"service-ca-operator-777779d784-cbn44\" (UID: \"981e423b-1add-418e-bd9b-01d92d9a66cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cbn44" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.783577 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t5l5\" (UniqueName: \"kubernetes.io/projected/1791eeb4-5348-4fe9-9cea-0eba0d00c869-kube-api-access-6t5l5\") pod \"multus-admission-controller-857f4d67dd-xt8xd\" (UID: \"1791eeb4-5348-4fe9-9cea-0eba0d00c869\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xt8xd" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.796440 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.799425 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8svb\" (UniqueName: \"kubernetes.io/projected/2abff28e-c9b9-41b3-be02-5876c5e4b91d-kube-api-access-k8svb\") pod \"cluster-image-registry-operator-dc59b4c8b-99zqn\" (UID: \"2abff28e-c9b9-41b3-be02-5876c5e4b91d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.830194 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbcr5\" (UniqueName: \"kubernetes.io/projected/b700b649-4899-457c-afe5-575cf4a8907e-kube-api-access-nbcr5\") pod \"router-default-5444994796-2dhzm\" (UID: \"b700b649-4899-457c-afe5-575cf4a8907e\") " pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.837790 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjz67\" (UniqueName: \"kubernetes.io/projected/2162c872-a146-4e1c-8173-103f521103f0-kube-api-access-gjz67\") pod \"openshift-apiserver-operator-796bbdcf4f-jbxnw\" (UID: \"2162c872-a146-4e1c-8173-103f521103f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.838278 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.860739 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhhng\" (UniqueName: \"kubernetes.io/projected/f90bba31-5d41-4e72-86b5-2b208c15fd27-kube-api-access-vhhng\") pod \"machine-config-operator-74547568cd-d4pz6\" (UID: \"f90bba31-5d41-4e72-86b5-2b208c15fd27\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.861326 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xt8xd" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864101 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864396 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-socket-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864426 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76dae780-f238-4e3f-9e31-38a47f1e1991-proxy-tls\") pod \"machine-config-controller-84d6567774-xks74\" (UID: \"76dae780-f238-4e3f-9e31-38a47f1e1991\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 
00:08:53.864446 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9679204c-d5b6-489d-9f27-d84d360284ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wdfct\" (UID: \"9679204c-d5b6-489d-9f27-d84d360284ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864468 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-csi-data-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864507 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-plugins-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864525 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-registration-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864550 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b92s\" (UniqueName: \"kubernetes.io/projected/13e3631d-31f8-4769-b931-22e47b7f9e14-kube-api-access-2b92s\") pod \"packageserver-d55dfcdfc-8hk54\" (UID: \"13e3631d-31f8-4769-b931-22e47b7f9e14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54" Feb 16 
00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864678 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7292f832-f19e-49fb-9d13-7af3c57d876c-signing-key\") pod \"service-ca-9c57cc56f-g6jpt\" (UID: \"7292f832-f19e-49fb-9d13-7af3c57d876c\") " pod="openshift-service-ca/service-ca-9c57cc56f-g6jpt" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864704 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7292f832-f19e-49fb-9d13-7af3c57d876c-signing-cabundle\") pod \"service-ca-9c57cc56f-g6jpt\" (UID: \"7292f832-f19e-49fb-9d13-7af3c57d876c\") " pod="openshift-service-ca/service-ca-9c57cc56f-g6jpt" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864720 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c55z7\" (UniqueName: \"kubernetes.io/projected/291f913a-6566-409f-8663-e2f695edf9a6-kube-api-access-c55z7\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864737 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgtb9\" (UniqueName: \"kubernetes.io/projected/b23a39ad-fdae-41b5-bc81-88ebee430bfb-kube-api-access-zgtb9\") pod \"ingress-canary-dc98z\" (UID: \"b23a39ad-fdae-41b5-bc81-88ebee430bfb\") " pod="openshift-ingress-canary/ingress-canary-dc98z" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864771 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp48f\" (UniqueName: \"kubernetes.io/projected/051b67fb-7c0f-4bb5-b1e4-c0ce4708c665-kube-api-access-pp48f\") pod \"machine-config-server-l55t7\" (UID: \"051b67fb-7c0f-4bb5-b1e4-c0ce4708c665\") " 
pod="openshift-machine-config-operator/machine-config-server-l55t7" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864794 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9679204c-d5b6-489d-9f27-d84d360284ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wdfct\" (UID: \"9679204c-d5b6-489d-9f27-d84d360284ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864817 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/13e3631d-31f8-4769-b931-22e47b7f9e14-tmpfs\") pod \"packageserver-d55dfcdfc-8hk54\" (UID: \"13e3631d-31f8-4769-b931-22e47b7f9e14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864834 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13e3631d-31f8-4769-b931-22e47b7f9e14-webhook-cert\") pod \"packageserver-d55dfcdfc-8hk54\" (UID: \"13e3631d-31f8-4769-b931-22e47b7f9e14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864859 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbkvz\" (UniqueName: \"kubernetes.io/projected/78a2ec7b-781f-4ec4-8920-377f90b037fb-kube-api-access-lbkvz\") pod \"package-server-manager-789f6589d5-cmfwl\" (UID: \"78a2ec7b-781f-4ec4-8920-377f90b037fb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864945 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/051b67fb-7c0f-4bb5-b1e4-c0ce4708c665-certs\") pod \"machine-config-server-l55t7\" (UID: \"051b67fb-7c0f-4bb5-b1e4-c0ce4708c665\") " pod="openshift-machine-config-operator/machine-config-server-l55t7" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864967 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-mountpoint-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.864991 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76dae780-f238-4e3f-9e31-38a47f1e1991-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xks74\" (UID: \"76dae780-f238-4e3f-9e31-38a47f1e1991\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.865042 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9szx\" (UniqueName: \"kubernetes.io/projected/7292f832-f19e-49fb-9d13-7af3c57d876c-kube-api-access-d9szx\") pod \"service-ca-9c57cc56f-g6jpt\" (UID: \"7292f832-f19e-49fb-9d13-7af3c57d876c\") " pod="openshift-service-ca/service-ca-9c57cc56f-g6jpt" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.865067 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/78a2ec7b-781f-4ec4-8920-377f90b037fb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cmfwl\" (UID: \"78a2ec7b-781f-4ec4-8920-377f90b037fb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl" Feb 16 00:08:53 crc kubenswrapper[4698]: 
I0216 00:08:53.865110 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/051b67fb-7c0f-4bb5-b1e4-c0ce4708c665-node-bootstrap-token\") pod \"machine-config-server-l55t7\" (UID: \"051b67fb-7c0f-4bb5-b1e4-c0ce4708c665\") " pod="openshift-machine-config-operator/machine-config-server-l55t7" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.865133 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13e3631d-31f8-4769-b931-22e47b7f9e14-apiservice-cert\") pod \"packageserver-d55dfcdfc-8hk54\" (UID: \"13e3631d-31f8-4769-b931-22e47b7f9e14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.865175 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0aa34695-49b9-4a9f-bec0-db46d80d3f64-metrics-tls\") pod \"dns-default-ngf8q\" (UID: \"0aa34695-49b9-4a9f-bec0-db46d80d3f64\") " pod="openshift-dns/dns-default-ngf8q" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.865194 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctvvr\" (UniqueName: \"kubernetes.io/projected/0aa34695-49b9-4a9f-bec0-db46d80d3f64-kube-api-access-ctvvr\") pod \"dns-default-ngf8q\" (UID: \"0aa34695-49b9-4a9f-bec0-db46d80d3f64\") " pod="openshift-dns/dns-default-ngf8q" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.865222 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0aa34695-49b9-4a9f-bec0-db46d80d3f64-config-volume\") pod \"dns-default-ngf8q\" (UID: \"0aa34695-49b9-4a9f-bec0-db46d80d3f64\") " pod="openshift-dns/dns-default-ngf8q" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.865242 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b23a39ad-fdae-41b5-bc81-88ebee430bfb-cert\") pod \"ingress-canary-dc98z\" (UID: \"b23a39ad-fdae-41b5-bc81-88ebee430bfb\") " pod="openshift-ingress-canary/ingress-canary-dc98z" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.865267 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz4lh\" (UniqueName: \"kubernetes.io/projected/76dae780-f238-4e3f-9e31-38a47f1e1991-kube-api-access-rz4lh\") pod \"machine-config-controller-84d6567774-xks74\" (UID: \"76dae780-f238-4e3f-9e31-38a47f1e1991\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.865282 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz2w4\" (UniqueName: \"kubernetes.io/projected/9679204c-d5b6-489d-9f27-d84d360284ae-kube-api-access-qz2w4\") pod \"marketplace-operator-79b997595-wdfct\" (UID: \"9679204c-d5b6-489d-9f27-d84d360284ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" Feb 16 00:08:53 crc kubenswrapper[4698]: E0216 00:08:53.865537 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:54.365520036 +0000 UTC m=+144.023418798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.865892 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-socket-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.870788 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7292f832-f19e-49fb-9d13-7af3c57d876c-signing-key\") pod \"service-ca-9c57cc56f-g6jpt\" (UID: \"7292f832-f19e-49fb-9d13-7af3c57d876c\") " pod="openshift-service-ca/service-ca-9c57cc56f-g6jpt" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.874020 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-plugins-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.875038 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7292f832-f19e-49fb-9d13-7af3c57d876c-signing-cabundle\") pod \"service-ca-9c57cc56f-g6jpt\" (UID: \"7292f832-f19e-49fb-9d13-7af3c57d876c\") " pod="openshift-service-ca/service-ca-9c57cc56f-g6jpt" Feb 16 00:08:53 crc 
kubenswrapper[4698]: I0216 00:08:53.875409 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/051b67fb-7c0f-4bb5-b1e4-c0ce4708c665-certs\") pod \"machine-config-server-l55t7\" (UID: \"051b67fb-7c0f-4bb5-b1e4-c0ce4708c665\") " pod="openshift-machine-config-operator/machine-config-server-l55t7" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.875949 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-mountpoint-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.876877 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9679204c-d5b6-489d-9f27-d84d360284ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wdfct\" (UID: \"9679204c-d5b6-489d-9f27-d84d360284ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.876990 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76dae780-f238-4e3f-9e31-38a47f1e1991-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xks74\" (UID: \"76dae780-f238-4e3f-9e31-38a47f1e1991\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74" Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.877106 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-csi-data-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" 
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.878144 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13e3631d-31f8-4769-b931-22e47b7f9e14-webhook-cert\") pod \"packageserver-d55dfcdfc-8hk54\" (UID: \"13e3631d-31f8-4769-b931-22e47b7f9e14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.878337 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13e3631d-31f8-4769-b931-22e47b7f9e14-apiservice-cert\") pod \"packageserver-d55dfcdfc-8hk54\" (UID: \"13e3631d-31f8-4769-b931-22e47b7f9e14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.879058 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0aa34695-49b9-4a9f-bec0-db46d80d3f64-config-volume\") pod \"dns-default-ngf8q\" (UID: \"0aa34695-49b9-4a9f-bec0-db46d80d3f64\") " pod="openshift-dns/dns-default-ngf8q"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.879338 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/291f913a-6566-409f-8663-e2f695edf9a6-registration-dir\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.880216 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/13e3631d-31f8-4769-b931-22e47b7f9e14-tmpfs\") pod \"packageserver-d55dfcdfc-8hk54\" (UID: \"13e3631d-31f8-4769-b931-22e47b7f9e14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.886142 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/051b67fb-7c0f-4bb5-b1e4-c0ce4708c665-node-bootstrap-token\") pod \"machine-config-server-l55t7\" (UID: \"051b67fb-7c0f-4bb5-b1e4-c0ce4708c665\") " pod="openshift-machine-config-operator/machine-config-server-l55t7"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.887306 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9679204c-d5b6-489d-9f27-d84d360284ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wdfct\" (UID: \"9679204c-d5b6-489d-9f27-d84d360284ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdfct"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.892668 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0aa34695-49b9-4a9f-bec0-db46d80d3f64-metrics-tls\") pod \"dns-default-ngf8q\" (UID: \"0aa34695-49b9-4a9f-bec0-db46d80d3f64\") " pod="openshift-dns/dns-default-ngf8q"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.892821 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b23a39ad-fdae-41b5-bc81-88ebee430bfb-cert\") pod \"ingress-canary-dc98z\" (UID: \"b23a39ad-fdae-41b5-bc81-88ebee430bfb\") " pod="openshift-ingress-canary/ingress-canary-dc98z"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.893055 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/78a2ec7b-781f-4ec4-8920-377f90b037fb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cmfwl\" (UID: \"78a2ec7b-781f-4ec4-8920-377f90b037fb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.893135 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm8cq\" (UniqueName: \"kubernetes.io/projected/396cf171-d75d-4c04-9211-098223756513-kube-api-access-vm8cq\") pod \"openshift-controller-manager-operator-756b6f6bc6-g9bpw\" (UID: \"396cf171-d75d-4c04-9211-098223756513\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.896028 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cbn44"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.899104 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76dae780-f238-4e3f-9e31-38a47f1e1991-proxy-tls\") pod \"machine-config-controller-84d6567774-xks74\" (UID: \"76dae780-f238-4e3f-9e31-38a47f1e1991\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.910978 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv8qg\" (UniqueName: \"kubernetes.io/projected/260201f8-5e3b-4ba4-a945-13fe10a8ad3a-kube-api-access-gv8qg\") pod \"migrator-59844c95c7-fh5qr\" (UID: \"260201f8-5e3b-4ba4-a945-13fe10a8ad3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fh5qr"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.918055 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nvkhl\" (UID: \"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.918095 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr"]
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.937133 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bg97c"]
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.939533 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ca33a8b-6e95-4a0f-9892-53b18a92b078-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zdlqd\" (UID: \"3ca33a8b-6e95-4a0f-9892-53b18a92b078\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.955104 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq895\" (UniqueName: \"kubernetes.io/projected/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-kube-api-access-nq895\") pod \"collect-profiles-29520000-bqmjt\" (UID: \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.967363 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:53 crc kubenswrapper[4698]: E0216 00:08:53.968142 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:54.468120162 +0000 UTC m=+144.126018924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.981643 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfnfc\" (UniqueName: \"kubernetes.io/projected/965a4f3a-6e82-46eb-964a-4c7f40fdaf0e-kube-api-access-zfnfc\") pod \"kube-storage-version-migrator-operator-b67b599dd-t8g5k\" (UID: \"965a4f3a-6e82-46eb-964a-4c7f40fdaf0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k"
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.988828 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6"]
Feb 16 00:08:53 crc kubenswrapper[4698]: I0216 00:08:53.999634 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnncf\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-kube-api-access-qnncf\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:54 crc kubenswrapper[4698]: W0216 00:08:54.004327 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb28deeab_0e2e_4d78_a65e_fbd423e08cf2.slice/crio-e4db63fc60146dbfcf2c99f7fbc64744e09404b7f96c5fb03797a28e0242d85f WatchSource:0}: Error finding container e4db63fc60146dbfcf2c99f7fbc64744e09404b7f96c5fb03797a28e0242d85f: Status 404 returned error can't find the container with id e4db63fc60146dbfcf2c99f7fbc64744e09404b7f96c5fb03797a28e0242d85f
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.017322 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9kll\" (UniqueName: \"kubernetes.io/projected/7af2ddd8-b028-4713-8550-6cac706db73f-kube-api-access-c9kll\") pod \"catalog-operator-68c6474976-qp9bm\" (UID: \"7af2ddd8-b028-4713-8550-6cac706db73f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.042731 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8btd\" (UniqueName: \"kubernetes.io/projected/5a30a526-d4c0-437f-a4e5-97b8a4dd32cf-kube-api-access-k8btd\") pod \"etcd-operator-b45778765-6wzm8\" (UID: \"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.051223 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ckzvr"]
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.068258 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjctd\" (UniqueName: \"kubernetes.io/projected/ec24b0ba-9563-4228-af90-7774e49f5505-kube-api-access-wjctd\") pod \"downloads-7954f5f757-4qvcf\" (UID: \"ec24b0ba-9563-4228-af90-7774e49f5505\") " pod="openshift-console/downloads-7954f5f757-4qvcf"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.069193 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:54 crc kubenswrapper[4698]: E0216 00:08:54.069858 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:54.569840448 +0000 UTC m=+144.227739220 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:54 crc kubenswrapper[4698]: W0216 00:08:54.074755 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb700b649_4899_457c_afe5_575cf4a8907e.slice/crio-764465a0cdda0086a8fcc396d0e686b91570d04a770d7891270bb1d7625d1369 WatchSource:0}: Error finding container 764465a0cdda0086a8fcc396d0e686b91570d04a770d7891270bb1d7625d1369: Status 404 returned error can't find the container with id 764465a0cdda0086a8fcc396d0e686b91570d04a770d7891270bb1d7625d1369
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.077289 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2abff28e-c9b9-41b3-be02-5876c5e4b91d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-99zqn\" (UID: \"2abff28e-c9b9-41b3-be02-5876c5e4b91d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.080890 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.087600 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9vkwc"]
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.087792 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.096177 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.096444 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3361de5c-c22b-46a4-b354-13c3e16a3d78-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t8z8b\" (UID: \"3361de5c-c22b-46a4-b354-13c3e16a3d78\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.115686 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5smm\" (UniqueName: \"kubernetes.io/projected/36efba2c-72ec-4f18-b467-4dc7f2245de9-kube-api-access-s5smm\") pod \"olm-operator-6b444d44fb-5sr9r\" (UID: \"36efba2c-72ec-4f18-b467-4dc7f2245de9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.116083 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.121006 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.126811 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd"
Feb 16 00:08:54 crc kubenswrapper[4698]: W0216 00:08:54.127763 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5ba3ac6_6e8d_4965_81f1_c1805efed27f.slice/crio-d8a5e1806b7e314717281d8b7adee3a4bde4af26475b3d2124ae4da83e0edd75 WatchSource:0}: Error finding container d8a5e1806b7e314717281d8b7adee3a4bde4af26475b3d2124ae4da83e0edd75: Status 404 returned error can't find the container with id d8a5e1806b7e314717281d8b7adee3a4bde4af26475b3d2124ae4da83e0edd75
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.145847 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.160169 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqgww\" (UniqueName: \"kubernetes.io/projected/b89dcc24-5331-4c05-9a27-5f4415a7faf1-kube-api-access-rqgww\") pod \"control-plane-machine-set-operator-78cbb6b69f-dmvbr\" (UID: \"b89dcc24-5331-4c05-9a27-5f4415a7faf1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dmvbr"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.195764 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlbvc\" (UniqueName: \"kubernetes.io/projected/d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1-kube-api-access-jlbvc\") pod \"ingress-operator-5b745b69d9-nvkhl\" (UID: \"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.198374 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fh5qr"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.199623 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.200671 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:54 crc kubenswrapper[4698]: E0216 00:08:54.201367 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:54.70134505 +0000 UTC m=+144.359243812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.213845 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dmvbr"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.215058 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.245840 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz2w4\" (UniqueName: \"kubernetes.io/projected/9679204c-d5b6-489d-9f27-d84d360284ae-kube-api-access-qz2w4\") pod \"marketplace-operator-79b997595-wdfct\" (UID: \"9679204c-d5b6-489d-9f27-d84d360284ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdfct"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.261802 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg8r6\" (UniqueName: \"kubernetes.io/projected/63fb50bd-c7be-4229-80c6-26017b6bac3b-kube-api-access-jg8r6\") pod \"dns-operator-744455d44c-db2vn\" (UID: \"63fb50bd-c7be-4229-80c6-26017b6bac3b\") " pod="openshift-dns-operator/dns-operator-744455d44c-db2vn"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.263993 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbkvz\" (UniqueName: \"kubernetes.io/projected/78a2ec7b-781f-4ec4-8920-377f90b037fb-kube-api-access-lbkvz\") pod \"package-server-manager-789f6589d5-cmfwl\" (UID: \"78a2ec7b-781f-4ec4-8920-377f90b037fb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.269193 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c55z7\" (UniqueName: \"kubernetes.io/projected/291f913a-6566-409f-8663-e2f695edf9a6-kube-api-access-c55z7\") pod \"csi-hostpathplugin-rcxmx\" (UID: \"291f913a-6566-409f-8663-e2f695edf9a6\") " pod="hostpath-provisioner/csi-hostpathplugin-rcxmx"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.271939 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.289356 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.295948 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgtb9\" (UniqueName: \"kubernetes.io/projected/b23a39ad-fdae-41b5-bc81-88ebee430bfb-kube-api-access-zgtb9\") pod \"ingress-canary-dc98z\" (UID: \"b23a39ad-fdae-41b5-bc81-88ebee430bfb\") " pod="openshift-ingress-canary/ingress-canary-dc98z"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.305974 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:54 crc kubenswrapper[4698]: E0216 00:08:54.306897 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:54.806866143 +0000 UTC m=+144.464764905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.307069 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rcxmx"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.308252 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp48f\" (UniqueName: \"kubernetes.io/projected/051b67fb-7c0f-4bb5-b1e4-c0ce4708c665-kube-api-access-pp48f\") pod \"machine-config-server-l55t7\" (UID: \"051b67fb-7c0f-4bb5-b1e4-c0ce4708c665\") " pod="openshift-machine-config-operator/machine-config-server-l55t7"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.318964 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-l55t7"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.319493 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.321444 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9szx\" (UniqueName: \"kubernetes.io/projected/7292f832-f19e-49fb-9d13-7af3c57d876c-kube-api-access-d9szx\") pod \"service-ca-9c57cc56f-g6jpt\" (UID: \"7292f832-f19e-49fb-9d13-7af3c57d876c\") " pod="openshift-service-ca/service-ca-9c57cc56f-g6jpt"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.326860 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4qvcf"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.344670 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctvvr\" (UniqueName: \"kubernetes.io/projected/0aa34695-49b9-4a9f-bec0-db46d80d3f64-kube-api-access-ctvvr\") pod \"dns-default-ngf8q\" (UID: \"0aa34695-49b9-4a9f-bec0-db46d80d3f64\") " pod="openshift-dns/dns-default-ngf8q"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.368082 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b92s\" (UniqueName: \"kubernetes.io/projected/13e3631d-31f8-4769-b931-22e47b7f9e14-kube-api-access-2b92s\") pod \"packageserver-d55dfcdfc-8hk54\" (UID: \"13e3631d-31f8-4769-b931-22e47b7f9e14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.369258 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-v8kwd"]
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.371543 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x6shs"]
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.388215 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bsd9j"]
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.399518 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz4lh\" (UniqueName: \"kubernetes.io/projected/76dae780-f238-4e3f-9e31-38a47f1e1991-kube-api-access-rz4lh\") pod \"machine-config-controller-84d6567774-xks74\" (UID: \"76dae780-f238-4e3f-9e31-38a47f1e1991\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.403416 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-db2vn"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.409324 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:54 crc kubenswrapper[4698]: E0216 00:08:54.411078 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:54.910035336 +0000 UTC m=+144.567934098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.453756 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.490130 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2dhzm" event={"ID":"b700b649-4899-457c-afe5-575cf4a8907e","Type":"ContainerStarted","Data":"764465a0cdda0086a8fcc396d0e686b91570d04a770d7891270bb1d7625d1369"}
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.492422 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" event={"ID":"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc","Type":"ContainerStarted","Data":"35fecd075bb59966176c878a0881812a11f0492c43c6c39bdf3c8609d250383f"}
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.492449 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" event={"ID":"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc","Type":"ContainerStarted","Data":"91203f993a1029dfd6c6941be5acb903b4e99c527d6b27f112507a2e73a358ef"}
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.493529 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" event={"ID":"a754e420-4dd0-4ab2-b492-f088b31c3dca","Type":"ContainerStarted","Data":"ab19e4ad465cc3b2a9b4d91c5141f7737a311bb45a8eb4db8376d049fac3d60f"}
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.495806 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" event={"ID":"92d745b7-0280-480b-b052-c2fd5499c43e","Type":"ContainerStarted","Data":"6dc5fadab5fbafba10a4ab05a8a4dc041df3879a7d29b05c67ca964d3a38f8bb"}
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.495837 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" event={"ID":"92d745b7-0280-480b-b052-c2fd5499c43e","Type":"ContainerStarted","Data":"060418a3b397f659319520cb0220b186322a5280359fc2199a39779c99311228"}
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.498500 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" event={"ID":"730d66a3-4e9f-4c36-9ce0-0c7e981b36ae","Type":"ContainerStarted","Data":"cb6c1b6a8e415591a3c5835ae77fb375b833fe61a3e9cc2bd32caf2db6b7c0f3"}
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.498535 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" event={"ID":"730d66a3-4e9f-4c36-9ce0-0c7e981b36ae","Type":"ContainerStarted","Data":"6ee1899130baf99466f2d823c369e5fb8ead5cb0615e599253379a92a4c5e222"}
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.501055 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" event={"ID":"b28deeab-0e2e-4d78-a65e-fbd423e08cf2","Type":"ContainerStarted","Data":"e4db63fc60146dbfcf2c99f7fbc64744e09404b7f96c5fb03797a28e0242d85f"}
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.502690 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" event={"ID":"d5ba3ac6-6e8d-4965-81f1-c1805efed27f","Type":"ContainerStarted","Data":"d8a5e1806b7e314717281d8b7adee3a4bde4af26475b3d2124ae4da83e0edd75"}
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.511376 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:54 crc kubenswrapper[4698]: E0216 00:08:54.511632 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:55.011565112 +0000 UTC m=+144.669463884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.512292 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:54 crc kubenswrapper[4698]: E0216 00:08:54.513469 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:55.013459421 +0000 UTC m=+144.671358183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.527208 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.535257 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.549717 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.555076 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-g6jpt"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.562735 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dc98z"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.583364 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ngf8q"
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.614098 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:54 crc kubenswrapper[4698]: E0216 00:08:54.614649 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:55.11462736 +0000 UTC m=+144.772526122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.668584 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499"]
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.679956 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf"]
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.683749 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29520000-k87pz"]
Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.715800 4698 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:54 crc kubenswrapper[4698]: E0216 00:08:54.716194 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:55.216178688 +0000 UTC m=+144.874077460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.819163 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:54 crc kubenswrapper[4698]: E0216 00:08:54.819356 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:55.319336801 +0000 UTC m=+144.977235563 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.820043 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:54 crc kubenswrapper[4698]: E0216 00:08:54.820502 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:55.320491845 +0000 UTC m=+144.978390607 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.875341 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cbn44"] Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.909830 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kkhhq"] Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.909933 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rn7cx"] Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.924710 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:54 crc kubenswrapper[4698]: E0216 00:08:54.926062 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:55.425930553 +0000 UTC m=+145.083829315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:54 crc kubenswrapper[4698]: I0216 00:08:54.978833 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd"] Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.012929 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k"] Feb 16 00:08:55 crc kubenswrapper[4698]: W0216 00:08:55.024710 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1791eeb4_5348_4fe9_9cea_0eba0d00c869.slice/crio-6049a2e5bb0ee69cd599f5fd5e76d3d1070fa7390a9c4653149ef90e17d0aef3 WatchSource:0}: Error finding container 6049a2e5bb0ee69cd599f5fd5e76d3d1070fa7390a9c4653149ef90e17d0aef3: Status 404 returned error can't find the container with id 6049a2e5bb0ee69cd599f5fd5e76d3d1070fa7390a9c4653149ef90e17d0aef3 Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.028959 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:55 crc kubenswrapper[4698]: E0216 00:08:55.030044 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:55.53002758 +0000 UTC m=+145.187926342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.040905 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw"] Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.045301 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xt8xd"] Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.047778 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm"] Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.052579 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6"] Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.106174 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw"] Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.132144 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:55 crc kubenswrapper[4698]: E0216 00:08:55.132389 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:55.632350314 +0000 UTC m=+145.290249076 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.132840 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:55 crc kubenswrapper[4698]: E0216 00:08:55.133876 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:55.633849543 +0000 UTC m=+145.291748495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:55 crc kubenswrapper[4698]: W0216 00:08:55.148765 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod396cf171_d75d_4c04_9211_098223756513.slice/crio-7270a5db7c8be1936bdd2d2a683d76029fee9097ca276eaaa87e6c111c1bd589 WatchSource:0}: Error finding container 7270a5db7c8be1936bdd2d2a683d76029fee9097ca276eaaa87e6c111c1bd589: Status 404 returned error can't find the container with id 7270a5db7c8be1936bdd2d2a683d76029fee9097ca276eaaa87e6c111c1bd589 Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.190010 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6wzm8"] Feb 16 00:08:55 crc kubenswrapper[4698]: W0216 00:08:55.199990 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7af2ddd8_b028_4713_8550_6cac706db73f.slice/crio-e469b3c79b1f3bb481f7b68b1938d80ca08017d11356fce6dbf793e7d9b87c4a WatchSource:0}: Error finding container e469b3c79b1f3bb481f7b68b1938d80ca08017d11356fce6dbf793e7d9b87c4a: Status 404 returned error can't find the container with id e469b3c79b1f3bb481f7b68b1938d80ca08017d11356fce6dbf793e7d9b87c4a Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.234443 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:55 crc kubenswrapper[4698]: E0216 00:08:55.234925 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:55.734883307 +0000 UTC m=+145.392782069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.267197 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b"] Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.324846 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt"] Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.346924 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:55 crc kubenswrapper[4698]: E0216 00:08:55.348423 4698 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:55.848402402 +0000 UTC m=+145.506301164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.411696 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wdfct"] Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.446317 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4qvcf"] Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.453006 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:55 crc kubenswrapper[4698]: E0216 00:08:55.453407 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:55.95338757 +0000 UTC m=+145.611286332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.515943 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" event={"ID":"f90bba31-5d41-4e72-86b5-2b208c15fd27","Type":"ContainerStarted","Data":"5f1b031ab51fa93ac784c3e74b1e2591246b684310444b11aebd51d8531958bf"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.519656 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29520000-k87pz" event={"ID":"a0c45070-058d-4223-a78e-11b1319eff38","Type":"ContainerStarted","Data":"7675d8493545d0432b6904a9aaa36fe57d860f4daa713a7df2fa55be196ed986"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.522917 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bsd9j" event={"ID":"3536e99a-ec06-422f-9944-20d3e4eca295","Type":"ContainerStarted","Data":"0270fad976de1fc78a2ee553156139492885298a9edadb5bed2b291928342784"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.524192 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf" event={"ID":"f5b48014-cea0-4a23-80a0-0022370c5e7c","Type":"ContainerStarted","Data":"c8041217d4dddd7731bfd01fd75f2884410eaa7f7c006357a41771b161e1ae7f"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.528578 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" event={"ID":"f6a60c76-1a30-4b5c-a984-08eef4aedb2b","Type":"ContainerStarted","Data":"0b2a946270e1b25e8df03eafd6866e3bbfa5da6ddb92797b709bf3dab4c8c867"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.534983 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b" event={"ID":"3361de5c-c22b-46a4-b354-13c3e16a3d78","Type":"ContainerStarted","Data":"33048f92824416b2e3f46e94e1a763a71715deba38c51b8d70285cb2de51c057"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.555752 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:55 crc kubenswrapper[4698]: E0216 00:08:55.557222 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.057204103 +0000 UTC m=+145.715102875 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.621821 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k" event={"ID":"965a4f3a-6e82-46eb-964a-4c7f40fdaf0e","Type":"ContainerStarted","Data":"30c6060def8f89f04f4786fb606801819c08e5136086408c2f545513670a174f"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.657685 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:55 crc kubenswrapper[4698]: E0216 00:08:55.657960 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.157917242 +0000 UTC m=+145.815816004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.658034 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:55 crc kubenswrapper[4698]: E0216 00:08:55.659636 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.15960394 +0000 UTC m=+145.817502702 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.660122 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw" event={"ID":"396cf171-d75d-4c04-9211-098223756513","Type":"ContainerStarted","Data":"7270a5db7c8be1936bdd2d2a683d76029fee9097ca276eaaa87e6c111c1bd589"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.661889 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-db2vn"] Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.661951 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xt8xd" event={"ID":"1791eeb4-5348-4fe9-9cea-0eba0d00c869","Type":"ContainerStarted","Data":"6049a2e5bb0ee69cd599f5fd5e76d3d1070fa7390a9c4653149ef90e17d0aef3"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.678603 4698 generic.go:334] "Generic (PLEG): container finished" podID="730d66a3-4e9f-4c36-9ce0-0c7e981b36ae" containerID="cb6c1b6a8e415591a3c5835ae77fb375b833fe61a3e9cc2bd32caf2db6b7c0f3" exitCode=0 Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.678762 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" event={"ID":"730d66a3-4e9f-4c36-9ce0-0c7e981b36ae","Type":"ContainerDied","Data":"cb6c1b6a8e415591a3c5835ae77fb375b833fe61a3e9cc2bd32caf2db6b7c0f3"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 
00:08:55.683236 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499" event={"ID":"3f3cf6a0-528b-4e38-9e30-f274f3caa4a4","Type":"ContainerStarted","Data":"ac1093571c8a2ec577787a328ee16f15a463f5787e2b8676b9a5896ac0053e2c"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.689398 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cbn44" event={"ID":"981e423b-1add-418e-bd9b-01d92d9a66cf","Type":"ContainerStarted","Data":"fe73ecb79889b7bb28801a9186d7b195a81e454268f9873eb5b19c088ca2b0bc"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.697321 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fh5qr"] Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.702672 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw" event={"ID":"2162c872-a146-4e1c-8173-103f521103f0","Type":"ContainerStarted","Data":"cc0cc1a33f4b4790a3b0f7fc8df777b8d8d70fe43f63e9de4fc3e70e50fbfe53"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.705444 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" event={"ID":"96145a82-f664-45ba-805c-3721f813c8a9","Type":"ContainerStarted","Data":"eb7f1490fb0b6707153dcb71a9a44ce1d154fb7c5b42dc4c8615051fe970dac2"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.706604 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt" event={"ID":"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96","Type":"ContainerStarted","Data":"7e63ca6d87c58dcc58b21fa18f5434f40375fd87fd7a2c94c0bb43a7229e1df4"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.708467 4698 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" event={"ID":"a754e420-4dd0-4ab2-b492-f088b31c3dca","Type":"ContainerStarted","Data":"31dc715f3a3fc903a2f86f4827056c268a8e0f86ee80aacac00250e9fd1122ec"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.709795 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x6shs" event={"ID":"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25","Type":"ContainerStarted","Data":"b895eb6d9f74c99531dd85d4bba92d6a3dd988e41312c85c3e3cd87d243e4863"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.710814 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" event={"ID":"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf","Type":"ContainerStarted","Data":"214a4b4d6e1d3c8cf3204afcb005d5363f14ba66c28b4ca524edf858066b633c"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.712369 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2dhzm" event={"ID":"b700b649-4899-457c-afe5-575cf4a8907e","Type":"ContainerStarted","Data":"55c7fe8a4f9d26be3af8c8d410c93d3c03db7cffd673b0eb8f84477c2a59349f"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.713684 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd" event={"ID":"3ca33a8b-6e95-4a0f-9892-53b18a92b078","Type":"ContainerStarted","Data":"da0e5a598fed9daf94e27c804db530bccf71a2eb0cbd1f70cd67eb2c1f5a15d1"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.716730 4698 generic.go:334] "Generic (PLEG): container finished" podID="b28deeab-0e2e-4d78-a65e-fbd423e08cf2" containerID="e492c1779295fcfee33ecf60670f26b17c07a3407f8df13b103760004bb7a2d7" exitCode=0 Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.716778 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" 
event={"ID":"b28deeab-0e2e-4d78-a65e-fbd423e08cf2","Type":"ContainerDied","Data":"e492c1779295fcfee33ecf60670f26b17c07a3407f8df13b103760004bb7a2d7"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.718249 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" event={"ID":"7af2ddd8-b028-4713-8550-6cac706db73f","Type":"ContainerStarted","Data":"e469b3c79b1f3bb481f7b68b1938d80ca08017d11356fce6dbf793e7d9b87c4a"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.719509 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" event={"ID":"517bde6b-579b-4047-a627-315b3722d147","Type":"ContainerStarted","Data":"377c82966ba30daf7aaa7e685f9f81625bed65cb24d30704eeacceb73f12b59a"} Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.719553 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.723783 4698 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-k8vxr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.723831 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" podUID="92d745b7-0280-480b-b052-c2fd5499c43e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.759015 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:55 crc kubenswrapper[4698]: E0216 00:08:55.759285 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.25925577 +0000 UTC m=+145.917154532 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.759875 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:55 crc kubenswrapper[4698]: E0216 00:08:55.760829 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.260819872 +0000 UTC m=+145.918718634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.850729 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.864068 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:55 crc kubenswrapper[4698]: E0216 00:08:55.865505 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.365488865 +0000 UTC m=+146.023387627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.930935 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r"] Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.938470 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.938571 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.949966 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rcxmx"] Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.972702 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:55 crc 
kubenswrapper[4698]: E0216 00:08:55.973419 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.473391869 +0000 UTC m=+146.131290631 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:55 crc kubenswrapper[4698]: I0216 00:08:55.980377 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dmvbr"] Feb 16 00:08:56 crc kubenswrapper[4698]: W0216 00:08:56.008857 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63fb50bd_c7be_4229_80c6_26017b6bac3b.slice/crio-fcfb71f3a9e75ca6c94ea55f6504618c14f531b7034d138d218a20fdcacfaa06 WatchSource:0}: Error finding container fcfb71f3a9e75ca6c94ea55f6504618c14f531b7034d138d218a20fdcacfaa06: Status 404 returned error can't find the container with id fcfb71f3a9e75ca6c94ea55f6504618c14f531b7034d138d218a20fdcacfaa06 Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.063321 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" podStartSLOduration=118.063300864 podStartE2EDuration="1m58.063300864s" podCreationTimestamp="2026-02-16 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 00:08:56.01316522 +0000 UTC m=+145.671063982" watchObservedRunningTime="2026-02-16 00:08:56.063300864 +0000 UTC m=+145.721199626" Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.074987 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.082012 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.581964363 +0000 UTC m=+146.239863125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.091147 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.091583 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.59156851 +0000 UTC m=+146.249467272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.107699 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2dhzm" podStartSLOduration=119.107606997 podStartE2EDuration="1m59.107606997s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:56.101542344 +0000 UTC m=+145.759441106" watchObservedRunningTime="2026-02-16 00:08:56.107606997 +0000 UTC m=+145.765505759" Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.149953 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl"] Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.166794 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn"] Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.169221 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dc98z"] Feb 16 00:08:56 crc kubenswrapper[4698]: W0216 00:08:56.183840 4698 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod291f913a_6566_409f_8663_e2f695edf9a6.slice/crio-1e620d6ec2fb0f4d335fdfe66d28caa42538ad8fea048f958e48a10156d47d02 WatchSource:0}: Error finding container 1e620d6ec2fb0f4d335fdfe66d28caa42538ad8fea048f958e48a10156d47d02: Status 404 returned error can't find the container with id 1e620d6ec2fb0f4d335fdfe66d28caa42538ad8fea048f958e48a10156d47d02 Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.192364 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.192787 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.692760252 +0000 UTC m=+146.350659024 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.193594 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.193901 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.693892974 +0000 UTC m=+146.351791736 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.207237 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ngf8q"] Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.221056 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vkwc" podStartSLOduration=119.221030738 podStartE2EDuration="1m59.221030738s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:56.18435242 +0000 UTC m=+145.842251182" watchObservedRunningTime="2026-02-16 00:08:56.221030738 +0000 UTC m=+145.878929500" Feb 16 00:08:56 crc kubenswrapper[4698]: W0216 00:08:56.229600 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36efba2c_72ec_4f18_b467_4dc7f2245de9.slice/crio-2544e1df1cd24c3eb9c512a6956de9b078d07b5ded381dfd53e16c462c32aba6 WatchSource:0}: Error finding container 2544e1df1cd24c3eb9c512a6956de9b078d07b5ded381dfd53e16c462c32aba6: Status 404 returned error can't find the container with id 2544e1df1cd24c3eb9c512a6956de9b078d07b5ded381dfd53e16c462c32aba6 Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.294480 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.294905 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.794867895 +0000 UTC m=+146.452766847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:56 crc kubenswrapper[4698]: W0216 00:08:56.308434 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2abff28e_c9b9_41b3_be02_5876c5e4b91d.slice/crio-a689b1d79c1cc55490333c2871d93d5930cd8e0e8d56efc13a2a98d1e75a8eaf WatchSource:0}: Error finding container a689b1d79c1cc55490333c2871d93d5930cd8e0e8d56efc13a2a98d1e75a8eaf: Status 404 returned error can't find the container with id a689b1d79c1cc55490333c2871d93d5930cd8e0e8d56efc13a2a98d1e75a8eaf Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.315034 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-g6jpt"] Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.362865 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54"] Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 
00:08:56.395140 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl"] Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.396640 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.397194 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.897177938 +0000 UTC m=+146.555076700 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.423058 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xks74"] Feb 16 00:08:56 crc kubenswrapper[4698]: W0216 00:08:56.459916 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13e3631d_31f8_4769_b931_22e47b7f9e14.slice/crio-7e283116e3ccc30f775bb360cfa0734f84b5769b3749e07ed0f1fccc29684ef0 WatchSource:0}: Error finding container 
7e283116e3ccc30f775bb360cfa0734f84b5769b3749e07ed0f1fccc29684ef0: Status 404 returned error can't find the container with id 7e283116e3ccc30f775bb360cfa0734f84b5769b3749e07ed0f1fccc29684ef0 Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.497482 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.497803 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.997772332 +0000 UTC m=+146.655671094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.498026 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.498483 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:56.998467464 +0000 UTC m=+146.656366216 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.599555 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.599745 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.099714017 +0000 UTC m=+146.757612799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.600542 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.600866 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.10085466 +0000 UTC m=+146.758753432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.702450 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.702688 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.20265464 +0000 UTC m=+146.860553402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.702873 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.703284 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.203263028 +0000 UTC m=+146.861161800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.731384 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" event={"ID":"d5ba3ac6-6e8d-4965-81f1-c1805efed27f","Type":"ContainerStarted","Data":"64b357295a4aff3e20539057d5bf09b8b7005fb53711468ce327ccc8ef917211"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.731627 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr"
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.733805 4698 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ckzvr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body=
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.733878 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" podUID="d5ba3ac6-6e8d-4965-81f1-c1805efed27f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused"
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.738389 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dc98z" event={"ID":"b23a39ad-fdae-41b5-bc81-88ebee430bfb","Type":"ContainerStarted","Data":"1ac679c67ef11f317414a0c332a1733e2d78a2fde459506701cafc486327939e"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.758941 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" podStartSLOduration=119.75891801 podStartE2EDuration="1m59.75891801s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:56.755464859 +0000 UTC m=+146.413363621" watchObservedRunningTime="2026-02-16 00:08:56.75891801 +0000 UTC m=+146.416816772"
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.764541 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-db2vn" event={"ID":"63fb50bd-c7be-4229-80c6-26017b6bac3b","Type":"ContainerStarted","Data":"fcfb71f3a9e75ca6c94ea55f6504618c14f531b7034d138d218a20fdcacfaa06"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.765883 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-l55t7" event={"ID":"051b67fb-7c0f-4bb5-b1e4-c0ce4708c665","Type":"ContainerStarted","Data":"66773ecb0d95976c1c548e645ddadcd19bdcef43f1e2df4869e3b9b816335464"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.766766 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn" event={"ID":"2abff28e-c9b9-41b3-be02-5876c5e4b91d","Type":"ContainerStarted","Data":"a689b1d79c1cc55490333c2871d93d5930cd8e0e8d56efc13a2a98d1e75a8eaf"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.767542 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74" event={"ID":"76dae780-f238-4e3f-9e31-38a47f1e1991","Type":"ContainerStarted","Data":"863ca47b242e471d629151a67c919ff29246b79130cc7f7e3c137d747f03fea0"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.769400 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cbn44" event={"ID":"981e423b-1add-418e-bd9b-01d92d9a66cf","Type":"ContainerStarted","Data":"7b7c06bce19c1db71c85236951832e748de9b32b871d0a2c27d69b700feb7676"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.771253 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4qvcf" event={"ID":"ec24b0ba-9563-4228-af90-7774e49f5505","Type":"ContainerStarted","Data":"836669c3f0ba2f083c66ef28c8b02cb0275f98d9e8980449eb0012d887329150"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.774841 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x6shs" event={"ID":"4678f0b3-74d6-4ea2-9294-6c9bc5e9de25","Type":"ContainerStarted","Data":"11660ad788cb1edec2ece8d8572f36af230e643c128e77317a17b967706f9e1a"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.776668 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b" event={"ID":"3361de5c-c22b-46a4-b354-13c3e16a3d78","Type":"ContainerStarted","Data":"cbe81091b8299d5546d806c58e5428fec6e1dbf591c7616cfe7a734926773e11"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.778890 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29520000-k87pz" event={"ID":"a0c45070-058d-4223-a78e-11b1319eff38","Type":"ContainerStarted","Data":"63c1459bceb154cd5104d1a8d0e699c96e4ee0f5d29054ccc48c736801f521d2"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.780114 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw" event={"ID":"396cf171-d75d-4c04-9211-098223756513","Type":"ContainerStarted","Data":"bdd57b3f1f407700eb290973df1ef92b0e70ac7796701a26964ebcc8d6183938"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.785482 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dmvbr" event={"ID":"b89dcc24-5331-4c05-9a27-5f4415a7faf1","Type":"ContainerStarted","Data":"f7c45c0cb4da57fbac95a057d66bb7bf4c1c3338607b966d9650c77962a5123d"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.787634 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bsd9j" event={"ID":"3536e99a-ec06-422f-9944-20d3e4eca295","Type":"ContainerStarted","Data":"bb9eebef946859de0012116e8f0435e8c42ae7baf8c30b2070261c2edaf19e0f"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.788337 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-bsd9j"
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.790208 4698 patch_prober.go:28] interesting pod/console-operator-58897d9998-bsd9j container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.790272 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bsd9j" podUID="3536e99a-ec06-422f-9944-20d3e4eca295" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.793492 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" event={"ID":"9679204c-d5b6-489d-9f27-d84d360284ae","Type":"ContainerStarted","Data":"71b2ec4ec2be8c26b30d41840e4d9d5786414bc380ab249a4a3429992b6069cf"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.793825 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t8z8b" podStartSLOduration=119.793800843 podStartE2EDuration="1m59.793800843s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:56.79221576 +0000 UTC m=+146.450114512" watchObservedRunningTime="2026-02-16 00:08:56.793800843 +0000 UTC m=+146.451699615"
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.795115 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54" event={"ID":"13e3631d-31f8-4769-b931-22e47b7f9e14","Type":"ContainerStarted","Data":"7e283116e3ccc30f775bb360cfa0734f84b5769b3749e07ed0f1fccc29684ef0"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.800789 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-g6jpt" event={"ID":"7292f832-f19e-49fb-9d13-7af3c57d876c","Type":"ContainerStarted","Data":"b29970fd65abc19771fb61920ad6807a1ccc117f31bc92a6d2eb1c7ffac140ea"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.804127 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" event={"ID":"f90bba31-5d41-4e72-86b5-2b208c15fd27","Type":"ContainerStarted","Data":"69e128f078664eb17497bc1c710b6c2019950f705473ad2a652d4e2f658b51de"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.805190 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.805392 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.305357951 +0000 UTC m=+146.963256713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.805881 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.807901 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.307881909 +0000 UTC m=+146.965780771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.810553 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" event={"ID":"f6a60c76-1a30-4b5c-a984-08eef4aedb2b","Type":"ContainerStarted","Data":"b9c64d2522f613869839926b9a07d61ca2facd416cd73f92c6c88e15351cb706"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.815431 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl" event={"ID":"78a2ec7b-781f-4ec4-8920-377f90b037fb","Type":"ContainerStarted","Data":"518a780dda1e11d1e9571914a25db385e6a13ffb5610bcd450670d342521ed67"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.819652 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r" event={"ID":"36efba2c-72ec-4f18-b467-4dc7f2245de9","Type":"ContainerStarted","Data":"2544e1df1cd24c3eb9c512a6956de9b078d07b5ded381dfd53e16c462c32aba6"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.830944 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" event={"ID":"7af2ddd8-b028-4713-8550-6cac706db73f","Type":"ContainerStarted","Data":"d5d3080c184fa858663164f54ba57a96566e22643efe897e9dd8ec300dfcc0c8"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.832845 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm"
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.835895 4698 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qp9bm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.835976 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" podUID="7af2ddd8-b028-4713-8550-6cac706db73f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.837883 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" event={"ID":"291f913a-6566-409f-8663-e2f695edf9a6","Type":"ContainerStarted","Data":"1e620d6ec2fb0f4d335fdfe66d28caa42538ad8fea048f958e48a10156d47d02"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.844086 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bsd9j" podStartSLOduration=119.844053042 podStartE2EDuration="1m59.844053042s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:56.833330053 +0000 UTC m=+146.491228815" watchObservedRunningTime="2026-02-16 00:08:56.844053042 +0000 UTC m=+146.501951814"
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.844968 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.845026 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.848207 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" event={"ID":"517bde6b-579b-4047-a627-315b3722d147","Type":"ContainerStarted","Data":"97ed0a85cb303686357f2360be26ce916c5c803a52c0f36a7b986808bbe94ec1"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.850135 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl" event={"ID":"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1","Type":"ContainerStarted","Data":"b882f111a5c79021690b50749716bbd4fe1bce3d1de5507db3a87529dfc3eb8b"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.852979 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fh5qr" event={"ID":"260201f8-5e3b-4ba4-a945-13fe10a8ad3a","Type":"ContainerStarted","Data":"7b725c377d28cc57c83d088e80b2775ec3982a7fe58ec9e9a317cd10758b7007"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.851317 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29520000-k87pz" podStartSLOduration=119.851210916 podStartE2EDuration="1m59.851210916s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:56.810377435 +0000 UTC m=+146.468276197" watchObservedRunningTime="2026-02-16 00:08:56.851210916 +0000 UTC m=+146.509109678"
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.858177 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" podStartSLOduration=118.858153079 podStartE2EDuration="1m58.858153079s" podCreationTimestamp="2026-02-16 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:56.856391377 +0000 UTC m=+146.514290139" watchObservedRunningTime="2026-02-16 00:08:56.858153079 +0000 UTC m=+146.516051841"
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.859094 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ngf8q" event={"ID":"0aa34695-49b9-4a9f-bec0-db46d80d3f64","Type":"ContainerStarted","Data":"d3cf0724ac1f298d5281fecb39075af666a02211e786632f2f154993d7686e7e"}
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.860868 4698 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-k8vxr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.860937 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" podUID="92d745b7-0280-480b-b052-c2fd5499c43e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.909385 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.909786 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.409752732 +0000 UTC m=+147.067651494 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:56 crc kubenswrapper[4698]: I0216 00:08:56.911967 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:56 crc kubenswrapper[4698]: E0216 00:08:56.916185 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.416156879 +0000 UTC m=+147.074055831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.015264 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:57 crc kubenswrapper[4698]: E0216 00:08:57.015451 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.515415971 +0000 UTC m=+147.173314733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.016110 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:57 crc kubenswrapper[4698]: E0216 00:08:57.017494 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.517483707 +0000 UTC m=+147.175382469 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.117941 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:57 crc kubenswrapper[4698]: E0216 00:08:57.119023 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.618993933 +0000 UTC m=+147.276892705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.220311 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:57 crc kubenswrapper[4698]: E0216 00:08:57.220767 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.72074568 +0000 UTC m=+147.378644442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.321422 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:57 crc kubenswrapper[4698]: E0216 00:08:57.322043 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.822011444 +0000 UTC m=+147.479910206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.322186 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:57 crc kubenswrapper[4698]: E0216 00:08:57.322852 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.822829422 +0000 UTC m=+147.480728184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.423371 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:57 crc kubenswrapper[4698]: E0216 00:08:57.424287 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:57.924268255 +0000 UTC m=+147.582167017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.525500 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:57 crc kubenswrapper[4698]: E0216 00:08:57.525874 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:58.025858735 +0000 UTC m=+147.683757497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.626506 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:57 crc kubenswrapper[4698]: E0216 00:08:57.626989 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:58.126972121 +0000 UTC m=+147.784870883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.728144 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:57 crc kubenswrapper[4698]: E0216 00:08:57.728726 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:58.228703408 +0000 UTC m=+147.886602230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.829010 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:57 crc kubenswrapper[4698]: E0216 00:08:57.829497 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:58.329474509 +0000 UTC m=+147.987373271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.855535 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 00:08:57 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld
Feb 16 00:08:57 crc kubenswrapper[4698]: [+]process-running ok
Feb 16 00:08:57 crc kubenswrapper[4698]: healthz check failed
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.855599 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.885886 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn" event={"ID":"2abff28e-c9b9-41b3-be02-5876c5e4b91d","Type":"ContainerStarted","Data":"5b86dd320fbd561b96526ba401b169a22a1ec49cb7c74391edd499ddd255893f"}
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.892674 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r" event={"ID":"36efba2c-72ec-4f18-b467-4dc7f2245de9","Type":"ContainerStarted","Data":"5229f88c5b3ec4dfe6df0e117fc6d0f334b9a74930f593ca24738826501ece8b"}
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.898960 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt" event={"ID":"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96","Type":"ContainerStarted","Data":"7001aa0d8311fdc1ccb48532abb3beb6619577ea3e98362f5338634a0001fb24"}
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.907081 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-l55t7" event={"ID":"051b67fb-7c0f-4bb5-b1e4-c0ce4708c665","Type":"ContainerStarted","Data":"783a04f23ebbe22e356bbb726816d06a8f5d76689281eb2d1d2a2457f94fb748"}
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.907431 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99zqn" podStartSLOduration=120.907413398 podStartE2EDuration="2m0.907413398s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:57.904562005 +0000 UTC m=+147.562460787" watchObservedRunningTime="2026-02-16 00:08:57.907413398 +0000 UTC m=+147.565312160"
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.909785 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" event={"ID":"f90bba31-5d41-4e72-86b5-2b208c15fd27","Type":"ContainerStarted","Data":"af87e4a1f0ca781f4376eebc6d196bc736acced0cd0459967be4660df36749fc"}
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.912752 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dmvbr" event={"ID":"b89dcc24-5331-4c05-9a27-5f4415a7faf1","Type":"ContainerStarted","Data":"c9bd2ee5ada08633d5828b38398f819ba0b68aadce5e3732438553766b318b1c"}
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.923515 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt" podStartSLOduration=120.923494487 podStartE2EDuration="2m0.923494487s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:57.922305491 +0000 UTC m=+147.580204253" watchObservedRunningTime="2026-02-16 00:08:57.923494487 +0000 UTC m=+147.581393249"
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.927698 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" event={"ID":"84c6b07e-a3a1-42de-89b9-f8d0f58f0cfc","Type":"ContainerStarted","Data":"9afd2789b06e83274c95b788468c066758f152f00b9b05b0a1df4d9643f10e22"}
Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.930554 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:57 crc kubenswrapper[4698]: E0216 00:08:57.931252 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:58.431227907 +0000 UTC m=+148.089126729 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.932104 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499" event={"ID":"3f3cf6a0-528b-4e38-9e30-f274f3caa4a4","Type":"ContainerStarted","Data":"b70a6002e3d607bb7bb5643d0e6b1dd15a39bc0c0c47f279fa1aad5d6b82efd5"} Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.934464 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd" event={"ID":"3ca33a8b-6e95-4a0f-9892-53b18a92b078","Type":"ContainerStarted","Data":"26ec95a00e02ee11e215e91a29c5e5077d0ce698b4b6ac988846ecb5fdfba245"} Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.937802 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fh5qr" event={"ID":"260201f8-5e3b-4ba4-a945-13fe10a8ad3a","Type":"ContainerStarted","Data":"5a334bbaae0647639bdf90743b705ed240805ac811ac1a71c65746336c7b5737"} Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.943185 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xt8xd" event={"ID":"1791eeb4-5348-4fe9-9cea-0eba0d00c869","Type":"ContainerStarted","Data":"7a0f3c589e6df66dc034c3f1d57719f7828cd604ace9e90602cfeacf329e8e2b"} Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.943264 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-xt8xd" event={"ID":"1791eeb4-5348-4fe9-9cea-0eba0d00c869","Type":"ContainerStarted","Data":"38ca39ae60b807e8d60d02cad8b1c9cf7e8fe4f14bc2c0e33fbfaeb7f1d50451"} Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.943399 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dmvbr" podStartSLOduration=120.943383302 podStartE2EDuration="2m0.943383302s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:57.942070401 +0000 UTC m=+147.599969163" watchObservedRunningTime="2026-02-16 00:08:57.943383302 +0000 UTC m=+147.601282064" Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.950123 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl" event={"ID":"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1","Type":"ContainerStarted","Data":"a009d3abd85428097464af7b2f48310deb955c0cc7994be628759772b1aed08e"} Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.951930 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" event={"ID":"5a30a526-d4c0-437f-a4e5-97b8a4dd32cf","Type":"ContainerStarted","Data":"bc11bf03b47a80cfc7f3d013f0486ec235983844257363594a5bcbaeb454f0bc"} Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.959440 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-l55t7" podStartSLOduration=6.9594214789999995 podStartE2EDuration="6.959421479s" podCreationTimestamp="2026-02-16 00:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:57.958138109 +0000 UTC 
m=+147.616036871" watchObservedRunningTime="2026-02-16 00:08:57.959421479 +0000 UTC m=+147.617320241" Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.971886 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dc98z" event={"ID":"b23a39ad-fdae-41b5-bc81-88ebee430bfb","Type":"ContainerStarted","Data":"716be0d81da4cf6a0c8a078b9f804e9dee8a5ae1fa734aac767ff1e4e12b762d"} Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.974081 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4qvcf" event={"ID":"ec24b0ba-9563-4228-af90-7774e49f5505","Type":"ContainerStarted","Data":"ac2684e00fdde09bd0242ce13382108463a035920a431aeccd84642780f860df"} Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.974726 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4qvcf" Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.975968 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ngf8q" event={"ID":"0aa34695-49b9-4a9f-bec0-db46d80d3f64","Type":"ContainerStarted","Data":"ebb5c464d9c9b96c79943f10364e52d0789e535de898ec221afe2fb9865a1ed2"} Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.976064 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-4qvcf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.976108 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4qvcf" podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.988882 
4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" event={"ID":"9679204c-d5b6-489d-9f27-d84d360284ae","Type":"ContainerStarted","Data":"c720b800721bdd8f1712d4e0d15bc8527d36ea11c701fe5150fd9f4006c94cbe"} Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.990234 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.992432 4698 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wdfct container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.992605 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" podUID="9679204c-d5b6-489d-9f27-d84d360284ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.994443 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-6wzm8" podStartSLOduration=120.994428249 podStartE2EDuration="2m0.994428249s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:57.987856383 +0000 UTC m=+147.645755145" watchObservedRunningTime="2026-02-16 00:08:57.994428249 +0000 UTC m=+147.652327011" Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.994499 4698 generic.go:334] "Generic (PLEG): container finished" podID="517bde6b-579b-4047-a627-315b3722d147" 
containerID="97ed0a85cb303686357f2360be26ce916c5c803a52c0f36a7b986808bbe94ec1" exitCode=0 Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.994578 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" event={"ID":"517bde6b-579b-4047-a627-315b3722d147","Type":"ContainerDied","Data":"97ed0a85cb303686357f2360be26ce916c5c803a52c0f36a7b986808bbe94ec1"} Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.994629 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" event={"ID":"517bde6b-579b-4047-a627-315b3722d147","Type":"ContainerStarted","Data":"a99f421222bc8370c3aa72f2177f4bae66797c203da8fc88ac08c54a375ad177"} Feb 16 00:08:57 crc kubenswrapper[4698]: I0216 00:08:57.998877 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" event={"ID":"96145a82-f664-45ba-805c-3721f813c8a9","Type":"ContainerStarted","Data":"25d4fb2305e64b85e157f46002c4fd01b56a7eb48ff51880f6db655ff7e6027e"} Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.000253 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.002776 4698 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rn7cx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.002819 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" podUID="96145a82-f664-45ba-805c-3721f813c8a9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 
10.217.0.7:8443: connect: connection refused" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.004558 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" event={"ID":"b28deeab-0e2e-4d78-a65e-fbd423e08cf2","Type":"ContainerStarted","Data":"addf9501b4ee5215568d3bb721df000895356b79df0d6d0838df5d24f6c7d5a9"} Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.014442 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4j499" podStartSLOduration=121.014417679 podStartE2EDuration="2m1.014417679s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:58.014165387 +0000 UTC m=+147.672064169" watchObservedRunningTime="2026-02-16 00:08:58.014417679 +0000 UTC m=+147.672316451" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.018074 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-db2vn" event={"ID":"63fb50bd-c7be-4229-80c6-26017b6bac3b","Type":"ContainerStarted","Data":"e1da523ffd9c71f44dbc3399969cb759c1d8da3de7558cc36c8077c2d53ca539"} Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.028951 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" event={"ID":"f6a60c76-1a30-4b5c-a984-08eef4aedb2b","Type":"ContainerStarted","Data":"1f3851d2009935965714e6396ce9233f08bfe44589d7505af3b05e6f3b80b48b"} Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.033169 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:58 crc kubenswrapper[4698]: E0216 00:08:58.033631 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:58.533568821 +0000 UTC m=+148.191467583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.036460 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:58 crc kubenswrapper[4698]: E0216 00:08:58.037659 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:58.537635641 +0000 UTC m=+148.195534573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.044236 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b76p7" podStartSLOduration=121.044209356 podStartE2EDuration="2m1.044209356s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:58.042926877 +0000 UTC m=+147.700825639" watchObservedRunningTime="2026-02-16 00:08:58.044209356 +0000 UTC m=+147.702108128" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.047669 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" event={"ID":"730d66a3-4e9f-4c36-9ce0-0c7e981b36ae","Type":"ContainerStarted","Data":"e5ef66c948d2c8214de557ee95c0138f24337420d326408784b167172d66ac56"} Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.047830 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.055255 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw" event={"ID":"2162c872-a146-4e1c-8173-103f521103f0","Type":"ContainerStarted","Data":"53244560e9bb8ad292004bd98efdaf5cdc87b3fbf995148688308cacb9b98683"} Feb 16 00:08:58 crc 
kubenswrapper[4698]: I0216 00:08:58.061959 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zdlqd" podStartSLOduration=121.061939272 podStartE2EDuration="2m1.061939272s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:58.060011862 +0000 UTC m=+147.717910624" watchObservedRunningTime="2026-02-16 00:08:58.061939272 +0000 UTC m=+147.719838034" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.073118 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74" event={"ID":"76dae780-f238-4e3f-9e31-38a47f1e1991","Type":"ContainerStarted","Data":"583046c9da4a421d3a97eb22949e828417ec5daac0f86fb8f119d38e5fd7aec2"} Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.079404 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4qvcf" podStartSLOduration=121.079382724 podStartE2EDuration="2m1.079382724s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:58.075030141 +0000 UTC m=+147.732928903" watchObservedRunningTime="2026-02-16 00:08:58.079382724 +0000 UTC m=+147.737281486" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.084079 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf" event={"ID":"f5b48014-cea0-4a23-80a0-0022370c5e7c","Type":"ContainerStarted","Data":"94169fc70c4ebdf1191d17bbdfb1769e66e4f07cec42fd0d18ceb02a10f1a2d9"} Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.095973 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-9c57cc56f-g6jpt" event={"ID":"7292f832-f19e-49fb-9d13-7af3c57d876c","Type":"ContainerStarted","Data":"17663ea73a8d0eac6c3fb56b8e6b0154787c0e0b77fd8ac8e6b8f0bb03708774"} Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.098011 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54" event={"ID":"13e3631d-31f8-4769-b931-22e47b7f9e14","Type":"ContainerStarted","Data":"9eaeb205e21204978444039c4db99140caf3e19b66574d9df81fd0f6d37a3a99"} Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.099869 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k" event={"ID":"965a4f3a-6e82-46eb-964a-4c7f40fdaf0e","Type":"ContainerStarted","Data":"3b95ad142779589533728c162643a33fb695ab9235368583787fd2884a1a3c1a"} Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.100703 4698 patch_prober.go:28] interesting pod/console-operator-58897d9998-bsd9j container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.100751 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bsd9j" podUID="3536e99a-ec06-422f-9944-20d3e4eca295" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.101863 4698 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qp9bm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: 
connect: connection refused" start-of-body= Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.101941 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" podUID="7af2ddd8-b028-4713-8550-6cac706db73f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.102385 4698 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ckzvr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.102419 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" podUID="d5ba3ac6-6e8d-4965-81f1-c1805efed27f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.106977 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" podStartSLOduration=120.106953617 podStartE2EDuration="2m0.106953617s" podCreationTimestamp="2026-02-16 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:58.10271016 +0000 UTC m=+147.760608932" watchObservedRunningTime="2026-02-16 00:08:58.106953617 +0000 UTC m=+147.764852379" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.139179 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:58 crc kubenswrapper[4698]: E0216 00:08:58.140541 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:58.640495379 +0000 UTC m=+148.298394131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.141261 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:58 crc kubenswrapper[4698]: E0216 00:08:58.142654 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:58.642639689 +0000 UTC m=+148.300538441 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.143406 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" podStartSLOduration=121.143371483 podStartE2EDuration="2m1.143371483s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:58.127966876 +0000 UTC m=+147.785865638" watchObservedRunningTime="2026-02-16 00:08:58.143371483 +0000 UTC m=+147.801270245" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.170505 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-v8kwd" podStartSLOduration=121.170476574 podStartE2EDuration="2m1.170476574s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:58.164699475 +0000 UTC m=+147.822598237" watchObservedRunningTime="2026-02-16 00:08:58.170476574 +0000 UTC m=+147.828375336" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.207869 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" podStartSLOduration=121.207847945 podStartE2EDuration="2m1.207847945s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:58.205552808 +0000 UTC m=+147.863451560" watchObservedRunningTime="2026-02-16 00:08:58.207847945 +0000 UTC m=+147.865746707" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.237368 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-dc98z" podStartSLOduration=8.237342937 podStartE2EDuration="8.237342937s" podCreationTimestamp="2026-02-16 00:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:58.236055618 +0000 UTC m=+147.893954390" watchObservedRunningTime="2026-02-16 00:08:58.237342937 +0000 UTC m=+147.895241699" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.242795 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:58 crc kubenswrapper[4698]: E0216 00:08:58.243059 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:58.743015301 +0000 UTC m=+148.400914063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.243231 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:58 crc kubenswrapper[4698]: E0216 00:08:58.244869 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:58.744848607 +0000 UTC m=+148.402747569 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.257580 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbxnw" podStartSLOduration=121.257561029 podStartE2EDuration="2m1.257561029s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:58.256048959 +0000 UTC m=+147.913947721" watchObservedRunningTime="2026-02-16 00:08:58.257561029 +0000 UTC m=+147.915459791" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.278530 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t8g5k" podStartSLOduration=121.278507924 podStartE2EDuration="2m1.278507924s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:58.277716677 +0000 UTC m=+147.935615439" watchObservedRunningTime="2026-02-16 00:08:58.278507924 +0000 UTC m=+147.936406686" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.299386 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cbn44" podStartSLOduration=120.299367155 podStartE2EDuration="2m0.299367155s" 
podCreationTimestamp="2026-02-16 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:58.298262914 +0000 UTC m=+147.956161686" watchObservedRunningTime="2026-02-16 00:08:58.299367155 +0000 UTC m=+147.957265917" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.319886 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-x6shs" podStartSLOduration=121.31986549 podStartE2EDuration="2m1.31986549s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:58.317131382 +0000 UTC m=+147.975030154" watchObservedRunningTime="2026-02-16 00:08:58.31986549 +0000 UTC m=+147.977764252" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.339428 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g9bpw" podStartSLOduration=121.339402689 podStartE2EDuration="2m1.339402689s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:58.334379556 +0000 UTC m=+147.992278318" watchObservedRunningTime="2026-02-16 00:08:58.339402689 +0000 UTC m=+147.997301451" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.350528 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:58 crc kubenswrapper[4698]: E0216 00:08:58.351129 4698 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:58.851095734 +0000 UTC m=+148.508994496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.453014 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:58 crc kubenswrapper[4698]: E0216 00:08:58.453466 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:58.953448018 +0000 UTC m=+148.611346780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.554712 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:58 crc kubenswrapper[4698]: E0216 00:08:58.554861 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.054827818 +0000 UTC m=+148.712726580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.555177 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:58 crc kubenswrapper[4698]: E0216 00:08:58.555675 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.055655897 +0000 UTC m=+148.713554659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.656565 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:58 crc kubenswrapper[4698]: E0216 00:08:58.657084 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.157037647 +0000 UTC m=+148.814936419 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.759638 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:58 crc kubenswrapper[4698]: E0216 00:08:58.760215 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.260186569 +0000 UTC m=+148.918085331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.843109 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:08:58 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:08:58 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:08:58 crc kubenswrapper[4698]: healthz check failed Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.843205 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.866704 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:58 crc kubenswrapper[4698]: E0216 00:08:58.867277 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 00:08:59.367257133 +0000 UTC m=+149.025155895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:58 crc kubenswrapper[4698]: I0216 00:08:58.968506 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:58 crc kubenswrapper[4698]: E0216 00:08:58.969413 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.469385178 +0000 UTC m=+149.127283940 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.071036 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.071286 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.571246261 +0000 UTC m=+149.229145023 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.071578 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.072097 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.572078159 +0000 UTC m=+149.229977141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.108190 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fh5qr" event={"ID":"260201f8-5e3b-4ba4-a945-13fe10a8ad3a","Type":"ContainerStarted","Data":"b56470ecda0a6209360be9881de8667e6e08a0159feca5821736d5fed16551ec"} Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.110125 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74" event={"ID":"76dae780-f238-4e3f-9e31-38a47f1e1991","Type":"ContainerStarted","Data":"eb9180e44238d9bae817a6b93527376bf02191224c9c0584e7126c8f95384507"} Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.112289 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-db2vn" event={"ID":"63fb50bd-c7be-4229-80c6-26017b6bac3b","Type":"ContainerStarted","Data":"7fa822ea75e31674837c14fe6f6d5a59df49e9a3ae5e25f74173d5255c14437d"} Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.114257 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf" event={"ID":"f5b48014-cea0-4a23-80a0-0022370c5e7c","Type":"ContainerStarted","Data":"64291393d1c84a7664cd4e93fae0bcb527b3a23e31db5d0e24c1976261dd9309"} Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.116261 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl" event={"ID":"78a2ec7b-781f-4ec4-8920-377f90b037fb","Type":"ContainerStarted","Data":"9ed926f9b4e59879c464f1f148a096d397e604f361b6d3dd69dfdf44b95f8156"} Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.119644 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" event={"ID":"517bde6b-579b-4047-a627-315b3722d147","Type":"ContainerStarted","Data":"376a1e277014ce20cefe7abbbb4100f514fc9c8652b3df2e00b2bbd6a2d61cc2"} Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.121723 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl" event={"ID":"d8ffb0b2-4e5e-4d23-b290-65fdbf90daa1","Type":"ContainerStarted","Data":"5502de2e6721956321389a7af7a600dd0d6f225683dca090e2b84c94f10bb2b7"} Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.122909 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-4qvcf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.122953 4698 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qp9bm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.122959 4698 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wdfct container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 
00:08:59.122974 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4qvcf" podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.123005 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" podUID="7af2ddd8-b028-4713-8550-6cac706db73f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.123023 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" podUID="9679204c-d5b6-489d-9f27-d84d360284ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.123034 4698 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rn7cx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.123125 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" podUID="96145a82-f664-45ba-805c-3721f813c8a9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.156467 4698 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bhfmf" podStartSLOduration=122.156443987 podStartE2EDuration="2m2.156443987s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:59.13891855 +0000 UTC m=+148.796817322" watchObservedRunningTime="2026-02-16 00:08:59.156443987 +0000 UTC m=+148.814342759" Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.157681 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54" podStartSLOduration=121.157670644 podStartE2EDuration="2m1.157670644s" podCreationTimestamp="2026-02-16 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:59.155288653 +0000 UTC m=+148.813187415" watchObservedRunningTime="2026-02-16 00:08:59.157670644 +0000 UTC m=+148.815569406" Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.179839 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.180349 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.680317419 +0000 UTC m=+149.338216181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.180444 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.181035 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.68100175 +0000 UTC m=+149.338900512 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.186152 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r" podStartSLOduration=121.186120869 podStartE2EDuration="2m1.186120869s" podCreationTimestamp="2026-02-16 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:59.179402676 +0000 UTC m=+148.837301448" watchObservedRunningTime="2026-02-16 00:08:59.186120869 +0000 UTC m=+148.844019631" Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.205580 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-xt8xd" podStartSLOduration=121.205550193 podStartE2EDuration="2m1.205550193s" podCreationTimestamp="2026-02-16 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:59.198747536 +0000 UTC m=+148.856646318" watchObservedRunningTime="2026-02-16 00:08:59.205550193 +0000 UTC m=+148.863448945" Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.262268 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvkhl" podStartSLOduration=122.262242932 podStartE2EDuration="2m2.262242932s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:59.257777524 +0000 UTC m=+148.915676296" watchObservedRunningTime="2026-02-16 00:08:59.262242932 +0000 UTC m=+148.920141694" Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.281811 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.281982 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.78195164 +0000 UTC m=+149.439850402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.282293 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.282959 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.283029 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.283465 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.283808 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.285291 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.785275715 +0000 UTC m=+149.443174687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.293324 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.304846 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.310777 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.328211 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.390988 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.391560 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.891531231 +0000 UTC m=+149.549429993 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.394543 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d4pz6" podStartSLOduration=122.394528541 podStartE2EDuration="2m2.394528541s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:59.393165408 +0000 UTC m=+149.051064170" watchObservedRunningTime="2026-02-16 00:08:59.394528541 +0000 UTC m=+149.052427303"
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.395966 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-g6jpt" podStartSLOduration=121.395958838 podStartE2EDuration="2m1.395958838s" podCreationTimestamp="2026-02-16 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:59.316225505 +0000 UTC m=+148.974124267" watchObservedRunningTime="2026-02-16 00:08:59.395958838 +0000 UTC m=+149.053857600"
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.493090 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.493777 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:08:59.993756951 +0000 UTC m=+149.651655713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.547838 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.568258 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.576531 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.599381 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.599537 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:00.099513464 +0000 UTC m=+149.757412226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.599802 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.600152 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:00.100144374 +0000 UTC m=+149.758043126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.701448 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.701710 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:00.201662349 +0000 UTC m=+149.859561111 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.701869 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.702410 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:00.202394944 +0000 UTC m=+149.860293706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.803337 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.803512 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:00.30348355 +0000 UTC m=+149.961382312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.803709 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.804136 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:00.30412587 +0000 UTC m=+149.962024642 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.858066 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 00:08:59 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld
Feb 16 00:08:59 crc kubenswrapper[4698]: [+]process-running ok
Feb 16 00:08:59 crc kubenswrapper[4698]: healthz check failed
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.858679 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 00:08:59 crc kubenswrapper[4698]: I0216 00:08:59.916141 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:08:59 crc kubenswrapper[4698]: E0216 00:08:59.916637 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:00.416598766 +0000 UTC m=+150.074497528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.019709 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.020253 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:00.520227791 +0000 UTC m=+150.178126553 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.052999 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" podStartSLOduration=122.052970205 podStartE2EDuration="2m2.052970205s" podCreationTimestamp="2026-02-16 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:08:59.430263334 +0000 UTC m=+149.088162096" watchObservedRunningTime="2026-02-16 00:09:00.052970205 +0000 UTC m=+149.710868967"
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.124514 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.125013 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:00.624984007 +0000 UTC m=+150.282882759 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.148939 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"baff3585565cc5c5e5b835d24dd7554f27ea0a76a1e5cf6229bf3a5a18141ad1"}
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.158283 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ngf8q" event={"ID":"0aa34695-49b9-4a9f-bec0-db46d80d3f64","Type":"ContainerStarted","Data":"b4bcfaaa763b7a0c50aee4a07cad4ded19d80ff269833d5537dac2206f3faaf9"}
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.158798 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ngf8q"
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.164829 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl" event={"ID":"78a2ec7b-781f-4ec4-8920-377f90b037fb","Type":"ContainerStarted","Data":"05da273596509a9e091d026fe04db1d3fe5407fd100732abdef82ac18f917622"}
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.164875 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl"
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.165791 4698 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wdfct container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.165818 4698 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rn7cx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.165919 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" podUID="96145a82-f664-45ba-805c-3721f813c8a9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.165829 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" podUID="9679204c-d5b6-489d-9f27-d84d360284ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.193390 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ngf8q" podStartSLOduration=9.193360791 podStartE2EDuration="9.193360791s" podCreationTimestamp="2026-02-16 00:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:00.193025915 +0000 UTC m=+149.850924677" watchObservedRunningTime="2026-02-16 00:09:00.193360791 +0000 UTC m=+149.851259563"
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.216322 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xks74" podStartSLOduration=123.216298189 podStartE2EDuration="2m3.216298189s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:00.213022086 +0000 UTC m=+149.870920858" watchObservedRunningTime="2026-02-16 00:09:00.216298189 +0000 UTC m=+149.874196951"
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.227466 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.252190 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:00.752139578 +0000 UTC m=+150.410038340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.259042 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-db2vn" podStartSLOduration=123.259002917 podStartE2EDuration="2m3.259002917s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:00.248041426 +0000 UTC m=+149.905940198" watchObservedRunningTime="2026-02-16 00:09:00.259002917 +0000 UTC m=+149.916901679"
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.297000 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl" podStartSLOduration=122.296977685 podStartE2EDuration="2m2.296977685s" podCreationTimestamp="2026-02-16 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:00.296270542 +0000 UTC m=+149.954169304" watchObservedRunningTime="2026-02-16 00:09:00.296977685 +0000 UTC m=+149.954876447"
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.325275 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" podStartSLOduration=123.325256322 podStartE2EDuration="2m3.325256322s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:00.323015827 +0000 UTC m=+149.980914589" watchObservedRunningTime="2026-02-16 00:09:00.325256322 +0000 UTC m=+149.983155084"
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.331442 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.331754 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:00.831720042 +0000 UTC m=+150.489618804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.332211 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.332715 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:00.832671207 +0000 UTC m=+150.490569969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.368663 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fh5qr" podStartSLOduration=123.368632621 podStartE2EDuration="2m3.368632621s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:00.367401904 +0000 UTC m=+150.025300666" watchObservedRunningTime="2026-02-16 00:09:00.368632621 +0000 UTC m=+150.026531383"
Feb 16 00:09:00 crc kubenswrapper[4698]: W0216 00:09:00.399469 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-dcb5d3a774eb1c0a31983a8946726d73b313788b58ee230a896c6c89b1c4e7c1 WatchSource:0}: Error finding container dcb5d3a774eb1c0a31983a8946726d73b313788b58ee230a896c6c89b1c4e7c1: Status 404 returned error can't find the container with id dcb5d3a774eb1c0a31983a8946726d73b313788b58ee230a896c6c89b1c4e7c1
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.433206 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.433396 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:00.933357904 +0000 UTC m=+150.591256666 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.433720 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.434413 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:00.934397383 +0000 UTC m=+150.592296145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.534726 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.534966 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.034928083 +0000 UTC m=+150.692826845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.535043 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.535492 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.035481999 +0000 UTC m=+150.693380761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.597728 4698 csr.go:261] certificate signing request csr-7gxnd is approved, waiting to be issued
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.609087 4698 csr.go:257] certificate signing request csr-7gxnd is issued
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.636146 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.636348 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.136312562 +0000 UTC m=+150.794211324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.636555 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.637099 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.137074937 +0000 UTC m=+150.794973699 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.737709 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.737960 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.237908662 +0000 UTC m=+150.895807424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.738126 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.738751 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.23873926 +0000 UTC m=+150.896638022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.840791 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.841009 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.34097457 +0000 UTC m=+150.998873332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.841196 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.841826 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.341803628 +0000 UTC m=+150.999702390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.843517 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:00 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:00 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:00 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.843601 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.942240 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.942477 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 00:09:01.442439464 +0000 UTC m=+151.100338226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:00 crc kubenswrapper[4698]: I0216 00:09:00.942686 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:00 crc kubenswrapper[4698]: E0216 00:09:00.943086 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.443069643 +0000 UTC m=+151.100968405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.044155 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:01 crc kubenswrapper[4698]: E0216 00:09:01.044415 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.5443695 +0000 UTC m=+151.202268262 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.045131 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:01 crc kubenswrapper[4698]: E0216 00:09:01.045568 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.545556744 +0000 UTC m=+151.203455506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.147178 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:01 crc kubenswrapper[4698]: E0216 00:09:01.147678 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.647606336 +0000 UTC m=+151.305505098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.173305 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"663ca17c4216ef746d8be167f47aa573d93e8b1778ffe8b43b9ef0931fcafffe"} Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.173396 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"dcb5d3a774eb1c0a31983a8946726d73b313788b58ee230a896c6c89b1c4e7c1"} Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.175429 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b1c51acc1f45da31ceb71e39ecbf7908f21815b4b23330f7f279bec9cd2f0ed1"} Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.175562 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"58dbd461e5b453cc8fdf165a561e59176f6aa0a81c26d4e3ad10b6bf167089f8"} Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.175875 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" 
Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.177176 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" event={"ID":"291f913a-6566-409f-8663-e2f695edf9a6","Type":"ContainerStarted","Data":"047a64c04d2f03e85b03818b975ea48ed7ebd37e9e6716415193cfbaf28916bf"} Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.179018 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6b6be347c8195e9ea6d68f441894670bbb2cf33ca13ddeac56ba144582eea5c0"} Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.249305 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:01 crc kubenswrapper[4698]: E0216 00:09:01.250384 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.750358539 +0000 UTC m=+151.408257501 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.350349 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:01 crc kubenswrapper[4698]: E0216 00:09:01.350933 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.850879459 +0000 UTC m=+151.508778221 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.452891 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:01 crc kubenswrapper[4698]: E0216 00:09:01.453452 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:01.953432634 +0000 UTC m=+151.611331396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.554500 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:01 crc kubenswrapper[4698]: E0216 00:09:01.554726 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:02.054692818 +0000 UTC m=+151.712591580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.555312 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:01 crc kubenswrapper[4698]: E0216 00:09:01.555836 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:02.05581824 +0000 UTC m=+151.713717002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.610508 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 00:04:00 +0000 UTC, rotation deadline is 2026-12-16 15:19:37.709880189 +0000 UTC Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.610723 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7287h10m36.099161073s for next certificate rotation Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.657010 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:01 crc kubenswrapper[4698]: E0216 00:09:01.657589 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:02.157569177 +0000 UTC m=+151.815467939 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.758773 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:01 crc kubenswrapper[4698]: E0216 00:09:01.760070 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:02.260055728 +0000 UTC m=+151.917954490 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.842702 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:01 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:01 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:01 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.843060 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.861342 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:01 crc kubenswrapper[4698]: E0216 00:09:01.861672 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 00:09:02.361637488 +0000 UTC m=+152.019536250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.861886 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:01 crc kubenswrapper[4698]: E0216 00:09:01.862338 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:02.362312519 +0000 UTC m=+152.020211281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.963142 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:01 crc kubenswrapper[4698]: E0216 00:09:01.963304 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:02.46327336 +0000 UTC m=+152.121172122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:01 crc kubenswrapper[4698]: I0216 00:09:01.963764 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:01 crc kubenswrapper[4698]: E0216 00:09:01.964201 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:02.464184551 +0000 UTC m=+152.122083313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.048275 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vrkd2"]
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.058026 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrkd2"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.063729 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.065123 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:02 crc kubenswrapper[4698]: E0216 00:09:02.065566 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:02.56554538 +0000 UTC m=+152.223444142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.077290 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrkd2"]
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.167600 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.167714 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbbdz\" (UniqueName: \"kubernetes.io/projected/b2921317-af1c-4c00-b999-99897d66aaba-kube-api-access-zbbdz\") pod \"certified-operators-vrkd2\" (UID: \"b2921317-af1c-4c00-b999-99897d66aaba\") " pod="openshift-marketplace/certified-operators-vrkd2"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.167747 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2921317-af1c-4c00-b999-99897d66aaba-utilities\") pod \"certified-operators-vrkd2\" (UID: \"b2921317-af1c-4c00-b999-99897d66aaba\") " pod="openshift-marketplace/certified-operators-vrkd2"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.167808 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2921317-af1c-4c00-b999-99897d66aaba-catalog-content\") pod \"certified-operators-vrkd2\" (UID: \"b2921317-af1c-4c00-b999-99897d66aaba\") " pod="openshift-marketplace/certified-operators-vrkd2"
Feb 16 00:09:02 crc kubenswrapper[4698]: E0216 00:09:02.168083 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:02.668057052 +0000 UTC m=+152.325956005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.188400 4698 generic.go:334] "Generic (PLEG): container finished" podID="d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96" containerID="7001aa0d8311fdc1ccb48532abb3beb6619577ea3e98362f5338634a0001fb24" exitCode=0
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.188482 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt" event={"ID":"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96","Type":"ContainerDied","Data":"7001aa0d8311fdc1ccb48532abb3beb6619577ea3e98362f5338634a0001fb24"}
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.227475 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4644p"]
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.229153 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4644p"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.231185 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.247448 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4644p"]
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.268512 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:02 crc kubenswrapper[4698]: E0216 00:09:02.268725 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:02.768689727 +0000 UTC m=+152.426588499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.269167 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.269307 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d741b08c-0e5a-40aa-ba0b-6f11743daa22-utilities\") pod \"community-operators-4644p\" (UID: \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\") " pod="openshift-marketplace/community-operators-4644p"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.269405 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbbdz\" (UniqueName: \"kubernetes.io/projected/b2921317-af1c-4c00-b999-99897d66aaba-kube-api-access-zbbdz\") pod \"certified-operators-vrkd2\" (UID: \"b2921317-af1c-4c00-b999-99897d66aaba\") " pod="openshift-marketplace/certified-operators-vrkd2"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.269485 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d741b08c-0e5a-40aa-ba0b-6f11743daa22-catalog-content\") pod \"community-operators-4644p\" (UID: \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\") " pod="openshift-marketplace/community-operators-4644p"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.269560 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2921317-af1c-4c00-b999-99897d66aaba-utilities\") pod \"certified-operators-vrkd2\" (UID: \"b2921317-af1c-4c00-b999-99897d66aaba\") " pod="openshift-marketplace/certified-operators-vrkd2"
Feb 16 00:09:02 crc kubenswrapper[4698]: E0216 00:09:02.269642 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:02.769630542 +0000 UTC m=+152.427529304 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.269812 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mczph\" (UniqueName: \"kubernetes.io/projected/d741b08c-0e5a-40aa-ba0b-6f11743daa22-kube-api-access-mczph\") pod \"community-operators-4644p\" (UID: \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\") " pod="openshift-marketplace/community-operators-4644p"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.269904 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2921317-af1c-4c00-b999-99897d66aaba-catalog-content\") pod \"certified-operators-vrkd2\" (UID: \"b2921317-af1c-4c00-b999-99897d66aaba\") " pod="openshift-marketplace/certified-operators-vrkd2"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.270255 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2921317-af1c-4c00-b999-99897d66aaba-utilities\") pod \"certified-operators-vrkd2\" (UID: \"b2921317-af1c-4c00-b999-99897d66aaba\") " pod="openshift-marketplace/certified-operators-vrkd2"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.270498 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2921317-af1c-4c00-b999-99897d66aaba-catalog-content\") pod \"certified-operators-vrkd2\" (UID: \"b2921317-af1c-4c00-b999-99897d66aaba\") " pod="openshift-marketplace/certified-operators-vrkd2"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.324562 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbbdz\" (UniqueName: \"kubernetes.io/projected/b2921317-af1c-4c00-b999-99897d66aaba-kube-api-access-zbbdz\") pod \"certified-operators-vrkd2\" (UID: \"b2921317-af1c-4c00-b999-99897d66aaba\") " pod="openshift-marketplace/certified-operators-vrkd2"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.371666 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:02 crc kubenswrapper[4698]: E0216 00:09:02.371869 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:02.87183387 +0000 UTC m=+152.529732652 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.372287 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mczph\" (UniqueName: \"kubernetes.io/projected/d741b08c-0e5a-40aa-ba0b-6f11743daa22-kube-api-access-mczph\") pod \"community-operators-4644p\" (UID: \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\") " pod="openshift-marketplace/community-operators-4644p"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.372402 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.372444 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d741b08c-0e5a-40aa-ba0b-6f11743daa22-utilities\") pod \"community-operators-4644p\" (UID: \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\") " pod="openshift-marketplace/community-operators-4644p"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.372498 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d741b08c-0e5a-40aa-ba0b-6f11743daa22-catalog-content\") pod \"community-operators-4644p\" (UID: \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\") " pod="openshift-marketplace/community-operators-4644p"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.373060 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d741b08c-0e5a-40aa-ba0b-6f11743daa22-catalog-content\") pod \"community-operators-4644p\" (UID: \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\") " pod="openshift-marketplace/community-operators-4644p"
Feb 16 00:09:02 crc kubenswrapper[4698]: E0216 00:09:02.373089 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:02.873068237 +0000 UTC m=+152.530966999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.373215 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d741b08c-0e5a-40aa-ba0b-6f11743daa22-utilities\") pod \"community-operators-4644p\" (UID: \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\") " pod="openshift-marketplace/community-operators-4644p"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.387308 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrkd2"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.427924 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fxx2x"]
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.429376 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxx2x"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.431278 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mczph\" (UniqueName: \"kubernetes.io/projected/d741b08c-0e5a-40aa-ba0b-6f11743daa22-kube-api-access-mczph\") pod \"community-operators-4644p\" (UID: \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\") " pod="openshift-marketplace/community-operators-4644p"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.449032 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.449963 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.454889 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.455216 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.455468 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxx2x"]
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.475239 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.475610 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb6b8789-217e-460f-95bb-e32613af20ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fb6b8789-217e-460f-95bb-e32613af20ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.475684 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb6b8789-217e-460f-95bb-e32613af20ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fb6b8789-217e-460f-95bb-e32613af20ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.475722 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92430cf-e02d-41ee-862e-d785decce5ec-utilities\") pod \"certified-operators-fxx2x\" (UID: \"a92430cf-e02d-41ee-862e-d785decce5ec\") " pod="openshift-marketplace/certified-operators-fxx2x"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.475745 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92430cf-e02d-41ee-862e-d785decce5ec-catalog-content\") pod \"certified-operators-fxx2x\" (UID: \"a92430cf-e02d-41ee-862e-d785decce5ec\") " pod="openshift-marketplace/certified-operators-fxx2x"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.475786 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kjhb\" (UniqueName: \"kubernetes.io/projected/a92430cf-e02d-41ee-862e-d785decce5ec-kube-api-access-9kjhb\") pod \"certified-operators-fxx2x\" (UID: \"a92430cf-e02d-41ee-862e-d785decce5ec\") " pod="openshift-marketplace/certified-operators-fxx2x"
Feb 16 00:09:02 crc kubenswrapper[4698]: E0216 00:09:02.475959 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:02.975928796 +0000 UTC m=+152.633827568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.487074 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.551960 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4644p"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.577483 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.577535 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kjhb\" (UniqueName: \"kubernetes.io/projected/a92430cf-e02d-41ee-862e-d785decce5ec-kube-api-access-9kjhb\") pod \"certified-operators-fxx2x\" (UID: \"a92430cf-e02d-41ee-862e-d785decce5ec\") " pod="openshift-marketplace/certified-operators-fxx2x"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.577571 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb6b8789-217e-460f-95bb-e32613af20ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fb6b8789-217e-460f-95bb-e32613af20ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.577632 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb6b8789-217e-460f-95bb-e32613af20ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fb6b8789-217e-460f-95bb-e32613af20ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.577671 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92430cf-e02d-41ee-862e-d785decce5ec-utilities\") pod \"certified-operators-fxx2x\" (UID: \"a92430cf-e02d-41ee-862e-d785decce5ec\") " pod="openshift-marketplace/certified-operators-fxx2x"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.577691 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92430cf-e02d-41ee-862e-d785decce5ec-catalog-content\") pod \"certified-operators-fxx2x\" (UID: \"a92430cf-e02d-41ee-862e-d785decce5ec\") " pod="openshift-marketplace/certified-operators-fxx2x"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.578114 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92430cf-e02d-41ee-862e-d785decce5ec-catalog-content\") pod \"certified-operators-fxx2x\" (UID: \"a92430cf-e02d-41ee-862e-d785decce5ec\") " pod="openshift-marketplace/certified-operators-fxx2x"
Feb 16 00:09:02 crc kubenswrapper[4698]: E0216 00:09:02.578448 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:03.078433359 +0000 UTC m=+152.736332121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.579017 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb6b8789-217e-460f-95bb-e32613af20ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fb6b8789-217e-460f-95bb-e32613af20ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.579260 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92430cf-e02d-41ee-862e-d785decce5ec-utilities\") pod \"certified-operators-fxx2x\" (UID: \"a92430cf-e02d-41ee-862e-d785decce5ec\") " pod="openshift-marketplace/certified-operators-fxx2x"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.607855 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb6b8789-217e-460f-95bb-e32613af20ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fb6b8789-217e-460f-95bb-e32613af20ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.613665 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kjhb\" (UniqueName: \"kubernetes.io/projected/a92430cf-e02d-41ee-862e-d785decce5ec-kube-api-access-9kjhb\") pod \"certified-operators-fxx2x\" (UID: \"a92430cf-e02d-41ee-862e-d785decce5ec\") " pod="openshift-marketplace/certified-operators-fxx2x"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.652484 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xmh8j"]
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.653560 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xmh8j"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.672639 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xmh8j"]
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.673030 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.678449 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.678800 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqhnt\" (UniqueName: \"kubernetes.io/projected/5308c07c-9d3d-4ead-8c6e-19c51adf5228-kube-api-access-pqhnt\") pod \"community-operators-xmh8j\" (UID: \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\") " pod="openshift-marketplace/community-operators-xmh8j"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.678847 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5308c07c-9d3d-4ead-8c6e-19c51adf5228-catalog-content\") pod \"community-operators-xmh8j\" (UID: \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\") " pod="openshift-marketplace/community-operators-xmh8j"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.678886 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5308c07c-9d3d-4ead-8c6e-19c51adf5228-utilities\") pod \"community-operators-xmh8j\" (UID: \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\") " pod="openshift-marketplace/community-operators-xmh8j"
Feb 16 00:09:02 crc kubenswrapper[4698]: E0216 00:09:02.679005 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:03.17898428 +0000 UTC m=+152.836883042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.782203 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxx2x"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.783024 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.783071 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5308c07c-9d3d-4ead-8c6e-19c51adf5228-utilities\") pod \"community-operators-xmh8j\" (UID: \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\") " pod="openshift-marketplace/community-operators-xmh8j"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.783179 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqhnt\" (UniqueName: \"kubernetes.io/projected/5308c07c-9d3d-4ead-8c6e-19c51adf5228-kube-api-access-pqhnt\") pod \"community-operators-xmh8j\" (UID: \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\") " pod="openshift-marketplace/community-operators-xmh8j"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.783218 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5308c07c-9d3d-4ead-8c6e-19c51adf5228-catalog-content\") pod \"community-operators-xmh8j\" (UID: \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\") " pod="openshift-marketplace/community-operators-xmh8j"
Feb 16 00:09:02 crc kubenswrapper[4698]: E0216 00:09:02.783383 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:03.28336489 +0000 UTC m=+152.941263652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.784198 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5308c07c-9d3d-4ead-8c6e-19c51adf5228-catalog-content\") pod \"community-operators-xmh8j\" (UID: \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\") " pod="openshift-marketplace/community-operators-xmh8j"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.784344 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5308c07c-9d3d-4ead-8c6e-19c51adf5228-utilities\") pod \"community-operators-xmh8j\" (UID: \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\") " pod="openshift-marketplace/community-operators-xmh8j"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.800631 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.825288 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqhnt\" (UniqueName: \"kubernetes.io/projected/5308c07c-9d3d-4ead-8c6e-19c51adf5228-kube-api-access-pqhnt\") pod \"community-operators-xmh8j\" (UID: \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\") " pod="openshift-marketplace/community-operators-xmh8j"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.854813 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 00:09:02 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld
Feb 16 00:09:02 crc kubenswrapper[4698]: [+]process-running ok
Feb 16 00:09:02 crc kubenswrapper[4698]: healthz check failed
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.854879 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.889287 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:02 crc kubenswrapper[4698]: E0216 00:09:02.889658 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-02-16 00:09:03.389593805 +0000 UTC m=+153.047492567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.911527 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:02 crc kubenswrapper[4698]: E0216 00:09:02.921183 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:03.421153453 +0000 UTC m=+153.079052215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:02 crc kubenswrapper[4698]: I0216 00:09:02.978194 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xmh8j" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.013141 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:03 crc kubenswrapper[4698]: E0216 00:09:03.015325 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:03.513786956 +0000 UTC m=+153.171685718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.117831 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:03 crc kubenswrapper[4698]: E0216 00:09:03.118636 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:03.618600756 +0000 UTC m=+153.276499518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.167524 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrkd2"] Feb 16 00:09:03 crc kubenswrapper[4698]: W0216 00:09:03.206773 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2921317_af1c_4c00_b999_99897d66aaba.slice/crio-7c0268a105712997d221047c10d1cc27e4177f72a683b32afc3b5fa4f7a71f86 WatchSource:0}: Error finding container 7c0268a105712997d221047c10d1cc27e4177f72a683b32afc3b5fa4f7a71f86: Status 404 returned error can't find the container with id 7c0268a105712997d221047c10d1cc27e4177f72a683b32afc3b5fa4f7a71f86 Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.220422 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:03 crc kubenswrapper[4698]: E0216 00:09:03.220567 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 00:09:03.720541953 +0000 UTC m=+153.378440705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.220743 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:03 crc kubenswrapper[4698]: E0216 00:09:03.221184 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:03.721169651 +0000 UTC m=+153.379068413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.272421 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bg97c" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.327179 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:03 crc kubenswrapper[4698]: E0216 00:09:03.328718 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:03.828692947 +0000 UTC m=+153.486591709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.432894 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:03 crc kubenswrapper[4698]: E0216 00:09:03.433259 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:03.933244665 +0000 UTC m=+153.591143427 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.533899 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:03 crc kubenswrapper[4698]: E0216 00:09:03.534955 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:04.034932179 +0000 UTC m=+153.692830941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.553270 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4644p"] Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.578142 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.579093 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.590223 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.612867 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.649191 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.650451 4698 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:09:03 crc kubenswrapper[4698]: E0216 00:09:03.650571 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:04.150553152 +0000 UTC m=+153.808451914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.660778 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.660822 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.670254 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bsd9j" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.679530 4698 patch_prober.go:28] interesting pod/console-f9d7485db-x6shs container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.679594 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x6shs" 
podUID="4678f0b3-74d6-4ea2-9294-6c9bc5e9de25" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.751433 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:03 crc kubenswrapper[4698]: E0216 00:09:03.753243 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:04.253221251 +0000 UTC m=+153.911120013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.774081 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.775176 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.830052 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.841591 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.861908 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.872138 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:03 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:03 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:03 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.872229 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:03 crc kubenswrapper[4698]: E0216 00:09:03.877363 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:04.37733836 +0000 UTC m=+154.035237122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:03 crc kubenswrapper[4698]: I0216 00:09:03.964958 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:03 crc kubenswrapper[4698]: E0216 00:09:03.965985 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:04.465956825 +0000 UTC m=+154.123855587 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.068679 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:04 crc kubenswrapper[4698]: E0216 00:09:04.069036 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:04.569023173 +0000 UTC m=+154.226921935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.077709 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.078706 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxx2x"] Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.125120 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.158101 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qp9bm" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.169643 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-secret-volume\") pod \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\" (UID: \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\") " Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.169832 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 
00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.169930 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq895\" (UniqueName: \"kubernetes.io/projected/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-kube-api-access-nq895\") pod \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\" (UID: \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\") " Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.169970 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-config-volume\") pod \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\" (UID: \"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96\") " Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.171028 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-config-volume" (OuterVolumeSpecName: "config-volume") pod "d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96" (UID: "d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:09:04 crc kubenswrapper[4698]: E0216 00:09:04.173001 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:04.672972483 +0000 UTC m=+154.330871245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.185782 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96" (UID: "d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.194835 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-kube-api-access-nq895" (OuterVolumeSpecName: "kube-api-access-nq895") pod "d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96" (UID: "d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96"). InnerVolumeSpecName "kube-api-access-nq895". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.216978 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.239794 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt" event={"ID":"d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96","Type":"ContainerDied","Data":"7e63ca6d87c58dcc58b21fa18f5434f40375fd87fd7a2c94c0bb43a7229e1df4"} Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.239841 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e63ca6d87c58dcc58b21fa18f5434f40375fd87fd7a2c94c0bb43a7229e1df4" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.239926 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-bqmjt" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.243686 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gqp8r"] Feb 16 00:09:04 crc kubenswrapper[4698]: E0216 00:09:04.244061 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96" containerName="collect-profiles" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.244079 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96" containerName="collect-profiles" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.244267 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96" containerName="collect-profiles" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.245362 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.248533 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xmh8j"] Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.261833 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.267173 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqp8r"] Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.273465 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb57b35-984c-43d1-8c3e-d3311bb457f4-utilities\") pod \"redhat-marketplace-gqp8r\" (UID: \"6bb57b35-984c-43d1-8c3e-d3311bb457f4\") " pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.273786 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.273991 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb57b35-984c-43d1-8c3e-d3311bb457f4-catalog-content\") pod \"redhat-marketplace-gqp8r\" (UID: \"6bb57b35-984c-43d1-8c3e-d3311bb457f4\") " pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.274018 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgg5v\" (UniqueName: \"kubernetes.io/projected/6bb57b35-984c-43d1-8c3e-d3311bb457f4-kube-api-access-pgg5v\") pod \"redhat-marketplace-gqp8r\" (UID: \"6bb57b35-984c-43d1-8c3e-d3311bb457f4\") " pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.274467 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq895\" (UniqueName: \"kubernetes.io/projected/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-kube-api-access-nq895\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.274488 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.274502 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0dd7da6-7a3f-46e0-8a76-fed1c00eeb96-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:04 crc kubenswrapper[4698]: E0216 00:09:04.274516 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:04.77449843 +0000 UTC m=+154.432397182 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.277870 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5sr9r" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.282184 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.285779 4698 generic.go:334] "Generic (PLEG): container finished" podID="b2921317-af1c-4c00-b999-99897d66aaba" containerID="ff0d3d0126444094b27ddf6ee9d8c21060f16fc0e5971a89c45c6da7cb0794c7" exitCode=0 Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.286186 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrkd2" event={"ID":"b2921317-af1c-4c00-b999-99897d66aaba","Type":"ContainerDied","Data":"ff0d3d0126444094b27ddf6ee9d8c21060f16fc0e5971a89c45c6da7cb0794c7"} Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.286210 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrkd2" event={"ID":"b2921317-af1c-4c00-b999-99897d66aaba","Type":"ContainerStarted","Data":"7c0268a105712997d221047c10d1cc27e4177f72a683b32afc3b5fa4f7a71f86"} Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.287874 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.302115 
4698 generic.go:334] "Generic (PLEG): container finished" podID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" containerID="f4c9f9f7c15c78883c85c3362f231c3d2e7afdf0f5194e48f1d7672d0d8e1231" exitCode=0 Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.302209 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4644p" event={"ID":"d741b08c-0e5a-40aa-ba0b-6f11743daa22","Type":"ContainerDied","Data":"f4c9f9f7c15c78883c85c3362f231c3d2e7afdf0f5194e48f1d7672d0d8e1231"} Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.302246 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4644p" event={"ID":"d741b08c-0e5a-40aa-ba0b-6f11743daa22","Type":"ContainerStarted","Data":"e53acc45b1d612ee86912a9cec36313f34474541f320124aa515011d2e8fbf74"} Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.323139 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fb6b8789-217e-460f-95bb-e32613af20ca","Type":"ContainerStarted","Data":"f3d3b15f80a890f75c2883e8ca8f96fc04ad812fb5a403da8f14708df649bddb"} Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.326527 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxx2x" event={"ID":"a92430cf-e02d-41ee-862e-d785decce5ec","Type":"ContainerStarted","Data":"3c4bf14c55ddd713b18a1f2f0ee813a843e26722c432189fe2702ae34e738be4"} Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.329118 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-4qvcf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.329155 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4qvcf" 
podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.329207 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-4qvcf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.329218 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4qvcf" podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.371935 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rzgh6" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.377264 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:04 crc kubenswrapper[4698]: E0216 00:09:04.378346 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:04.878325223 +0000 UTC m=+154.536223985 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.378727 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb57b35-984c-43d1-8c3e-d3311bb457f4-catalog-content\") pod \"redhat-marketplace-gqp8r\" (UID: \"6bb57b35-984c-43d1-8c3e-d3311bb457f4\") " pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.378797 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgg5v\" (UniqueName: \"kubernetes.io/projected/6bb57b35-984c-43d1-8c3e-d3311bb457f4-kube-api-access-pgg5v\") pod \"redhat-marketplace-gqp8r\" (UID: \"6bb57b35-984c-43d1-8c3e-d3311bb457f4\") " pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.378963 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb57b35-984c-43d1-8c3e-d3311bb457f4-utilities\") pod \"redhat-marketplace-gqp8r\" (UID: \"6bb57b35-984c-43d1-8c3e-d3311bb457f4\") " pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.379014 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: 
\"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.379429 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb57b35-984c-43d1-8c3e-d3311bb457f4-catalog-content\") pod \"redhat-marketplace-gqp8r\" (UID: \"6bb57b35-984c-43d1-8c3e-d3311bb457f4\") " pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.381897 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb57b35-984c-43d1-8c3e-d3311bb457f4-utilities\") pod \"redhat-marketplace-gqp8r\" (UID: \"6bb57b35-984c-43d1-8c3e-d3311bb457f4\") " pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:04 crc kubenswrapper[4698]: E0216 00:09:04.381986 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:04.881972823 +0000 UTC m=+154.539871785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.417585 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgg5v\" (UniqueName: \"kubernetes.io/projected/6bb57b35-984c-43d1-8c3e-d3311bb457f4-kube-api-access-pgg5v\") pod \"redhat-marketplace-gqp8r\" (UID: \"6bb57b35-984c-43d1-8c3e-d3311bb457f4\") " pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.484494 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:04 crc kubenswrapper[4698]: E0216 00:09:04.484971 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:04.984948447 +0000 UTC m=+154.642847209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.536849 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.590290 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:04 crc kubenswrapper[4698]: E0216 00:09:04.592541 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:05.092525645 +0000 UTC m=+154.750424407 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.601609 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8hk54" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.622551 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.629496 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cm6sf"] Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.630693 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.654365 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cm6sf"] Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.694482 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.694858 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65952246-da5c-4f4c-bb5a-a0b236b3675f-utilities\") pod \"redhat-marketplace-cm6sf\" (UID: \"65952246-da5c-4f4c-bb5a-a0b236b3675f\") " pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.694887 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlb7z\" (UniqueName: \"kubernetes.io/projected/65952246-da5c-4f4c-bb5a-a0b236b3675f-kube-api-access-dlb7z\") pod \"redhat-marketplace-cm6sf\" (UID: \"65952246-da5c-4f4c-bb5a-a0b236b3675f\") " pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.694956 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65952246-da5c-4f4c-bb5a-a0b236b3675f-catalog-content\") pod \"redhat-marketplace-cm6sf\" (UID: \"65952246-da5c-4f4c-bb5a-a0b236b3675f\") " pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:04 crc kubenswrapper[4698]: E0216 00:09:04.695070 4698 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:05.195051799 +0000 UTC m=+154.852950561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.796100 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65952246-da5c-4f4c-bb5a-a0b236b3675f-catalog-content\") pod \"redhat-marketplace-cm6sf\" (UID: \"65952246-da5c-4f4c-bb5a-a0b236b3675f\") " pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.796389 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.796431 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65952246-da5c-4f4c-bb5a-a0b236b3675f-utilities\") pod \"redhat-marketplace-cm6sf\" (UID: \"65952246-da5c-4f4c-bb5a-a0b236b3675f\") " pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:04 crc 
kubenswrapper[4698]: I0216 00:09:04.796449 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlb7z\" (UniqueName: \"kubernetes.io/projected/65952246-da5c-4f4c-bb5a-a0b236b3675f-kube-api-access-dlb7z\") pod \"redhat-marketplace-cm6sf\" (UID: \"65952246-da5c-4f4c-bb5a-a0b236b3675f\") " pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.797236 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65952246-da5c-4f4c-bb5a-a0b236b3675f-catalog-content\") pod \"redhat-marketplace-cm6sf\" (UID: \"65952246-da5c-4f4c-bb5a-a0b236b3675f\") " pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:04 crc kubenswrapper[4698]: E0216 00:09:04.797502 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:05.297489627 +0000 UTC m=+154.955388389 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.797844 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65952246-da5c-4f4c-bb5a-a0b236b3675f-utilities\") pod \"redhat-marketplace-cm6sf\" (UID: \"65952246-da5c-4f4c-bb5a-a0b236b3675f\") " pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.816054 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlb7z\" (UniqueName: \"kubernetes.io/projected/65952246-da5c-4f4c-bb5a-a0b236b3675f-kube-api-access-dlb7z\") pod \"redhat-marketplace-cm6sf\" (UID: \"65952246-da5c-4f4c-bb5a-a0b236b3675f\") " pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.842638 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:04 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:04 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:04 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.842699 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:04 crc kubenswrapper[4698]: I0216 00:09:04.897381 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:04 crc kubenswrapper[4698]: E0216 00:09:04.897846 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:05.397825208 +0000 UTC m=+155.055723970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.000761 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:05 crc kubenswrapper[4698]: E0216 00:09:05.001199 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-16 00:09:05.50118124 +0000 UTC m=+155.159080012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.017378 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.093671 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.094912 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.098995 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.101481 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:05 crc kubenswrapper[4698]: E0216 00:09:05.101593 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:05.601573575 +0000 UTC m=+155.259472337 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.101863 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.102020 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 00:09:05 crc kubenswrapper[4698]: E0216 00:09:05.102202 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:05.602192463 +0000 UTC m=+155.260091225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.129493 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.149519 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqp8r"] Feb 16 00:09:05 crc kubenswrapper[4698]: W0216 00:09:05.171785 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bb57b35_984c_43d1_8c3e_d3311bb457f4.slice/crio-0330dca99d8abf690b64c63398286c45aba5b5f947f2d17c05a49675f11265c4 WatchSource:0}: Error finding container 0330dca99d8abf690b64c63398286c45aba5b5f947f2d17c05a49675f11265c4: Status 404 returned error can't find the container with id 0330dca99d8abf690b64c63398286c45aba5b5f947f2d17c05a49675f11265c4 Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.216203 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.216526 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e6eac5bf-5ba6-4358-a2c3-f8697c73b765-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e6eac5bf-5ba6-4358-a2c3-f8697c73b765\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.216629 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6eac5bf-5ba6-4358-a2c3-f8697c73b765-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e6eac5bf-5ba6-4358-a2c3-f8697c73b765\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 00:09:05 crc kubenswrapper[4698]: E0216 00:09:05.216742 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:05.716725285 +0000 UTC m=+155.374624047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.274208 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bwn9c"]
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.282044 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bwn9c"]
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.282271 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bwn9c"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.290210 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.317601 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6eac5bf-5ba6-4358-a2c3-f8697c73b765-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e6eac5bf-5ba6-4358-a2c3-f8697c73b765\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.317691 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6eac5bf-5ba6-4358-a2c3-f8697c73b765-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e6eac5bf-5ba6-4358-a2c3-f8697c73b765\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.317729 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.317824 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6eac5bf-5ba6-4358-a2c3-f8697c73b765-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e6eac5bf-5ba6-4358-a2c3-f8697c73b765\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 00:09:05 crc kubenswrapper[4698]: E0216 00:09:05.318435 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:05.81842272 +0000 UTC m=+155.476321482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.356908 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqp8r" event={"ID":"6bb57b35-984c-43d1-8c3e-d3311bb457f4","Type":"ContainerStarted","Data":"0330dca99d8abf690b64c63398286c45aba5b5f947f2d17c05a49675f11265c4"}
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.366256 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fb6b8789-217e-460f-95bb-e32613af20ca","Type":"ContainerStarted","Data":"a3abc182bba26ec3da9e4256bb85897c8b1ae5be97cb45c3bf0f04a5e135dffd"}
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.371935 4698 generic.go:334] "Generic (PLEG): container finished" podID="a92430cf-e02d-41ee-862e-d785decce5ec" containerID="932e45a9e5aed578768414d709ca6585b928fbc27fcab303e139ae3223812c2c" exitCode=0
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.372018 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxx2x" event={"ID":"a92430cf-e02d-41ee-862e-d785decce5ec","Type":"ContainerDied","Data":"932e45a9e5aed578768414d709ca6585b928fbc27fcab303e139ae3223812c2c"}
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.380055 4698 generic.go:334] "Generic (PLEG): container finished" podID="5308c07c-9d3d-4ead-8c6e-19c51adf5228" containerID="4ad3368180fc3c29dff2c4a28fa535fdda0bb75b558ceee474788142e3e6d688" exitCode=0
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.381076 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xmh8j" event={"ID":"5308c07c-9d3d-4ead-8c6e-19c51adf5228","Type":"ContainerDied","Data":"4ad3368180fc3c29dff2c4a28fa535fdda0bb75b558ceee474788142e3e6d688"}
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.381141 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xmh8j" event={"ID":"5308c07c-9d3d-4ead-8c6e-19c51adf5228","Type":"ContainerStarted","Data":"8ace63010c1971da1ab97e319022c11a199351fa34a26351386234fa0827b901"}
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.421077 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.421351 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghqnk\" (UniqueName: \"kubernetes.io/projected/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-kube-api-access-ghqnk\") pod \"redhat-operators-bwn9c\" (UID: \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\") " pod="openshift-marketplace/redhat-operators-bwn9c"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.421379 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-utilities\") pod \"redhat-operators-bwn9c\" (UID: \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\") " pod="openshift-marketplace/redhat-operators-bwn9c"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.421492 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-catalog-content\") pod \"redhat-operators-bwn9c\" (UID: \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\") " pod="openshift-marketplace/redhat-operators-bwn9c"
Feb 16 00:09:05 crc kubenswrapper[4698]: E0216 00:09:05.421735 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:05.921717809 +0000 UTC m=+155.579616571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.426712 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6eac5bf-5ba6-4358-a2c3-f8697c73b765-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e6eac5bf-5ba6-4358-a2c3-f8697c73b765\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.446201 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.446168178 podStartE2EDuration="3.446168178s" podCreationTimestamp="2026-02-16 00:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:05.402231681 +0000 UTC m=+155.060130443" watchObservedRunningTime="2026-02-16 00:09:05.446168178 +0000 UTC m=+155.104066950"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.450098 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.531269 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:05 crc kubenswrapper[4698]: E0216 00:09:05.533940 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.033919772 +0000 UTC m=+155.691818534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.534778 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-catalog-content\") pod \"redhat-operators-bwn9c\" (UID: \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\") " pod="openshift-marketplace/redhat-operators-bwn9c"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.534921 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghqnk\" (UniqueName: \"kubernetes.io/projected/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-kube-api-access-ghqnk\") pod \"redhat-operators-bwn9c\" (UID: \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\") " pod="openshift-marketplace/redhat-operators-bwn9c"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.534952 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-utilities\") pod \"redhat-operators-bwn9c\" (UID: \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\") " pod="openshift-marketplace/redhat-operators-bwn9c"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.535781 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-catalog-content\") pod \"redhat-operators-bwn9c\" (UID: \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\") " pod="openshift-marketplace/redhat-operators-bwn9c"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.536688 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-utilities\") pod \"redhat-operators-bwn9c\" (UID: \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\") " pod="openshift-marketplace/redhat-operators-bwn9c"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.568435 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghqnk\" (UniqueName: \"kubernetes.io/projected/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-kube-api-access-ghqnk\") pod \"redhat-operators-bwn9c\" (UID: \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\") " pod="openshift-marketplace/redhat-operators-bwn9c"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.634903 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dclvg"]
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.636201 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dclvg"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.637478 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:05 crc kubenswrapper[4698]: E0216 00:09:05.641576 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.141555154 +0000 UTC m=+155.799453916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.661999 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bwn9c"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.664727 4698 patch_prober.go:28] interesting pod/apiserver-76f77b778f-kkhhq container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 16 00:09:05 crc kubenswrapper[4698]: [+]log ok
Feb 16 00:09:05 crc kubenswrapper[4698]: [+]etcd ok
Feb 16 00:09:05 crc kubenswrapper[4698]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 16 00:09:05 crc kubenswrapper[4698]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 16 00:09:05 crc kubenswrapper[4698]: [+]poststarthook/max-in-flight-filter ok
Feb 16 00:09:05 crc kubenswrapper[4698]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 16 00:09:05 crc kubenswrapper[4698]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 16 00:09:05 crc kubenswrapper[4698]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Feb 16 00:09:05 crc kubenswrapper[4698]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Feb 16 00:09:05 crc kubenswrapper[4698]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 16 00:09:05 crc kubenswrapper[4698]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 16 00:09:05 crc kubenswrapper[4698]: [+]poststarthook/openshift.io-startinformers ok
Feb 16 00:09:05 crc kubenswrapper[4698]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 16 00:09:05 crc kubenswrapper[4698]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 16 00:09:05 crc kubenswrapper[4698]: livez check failed
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.664805 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" podUID="517bde6b-579b-4047-a627-315b3722d147" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.678714 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dclvg"]
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.750474 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.750528 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9888dd66-5ef5-499a-92a8-c9fd32335a20-utilities\") pod \"redhat-operators-dclvg\" (UID: \"9888dd66-5ef5-499a-92a8-c9fd32335a20\") " pod="openshift-marketplace/redhat-operators-dclvg"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.750548 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpgz5\" (UniqueName: \"kubernetes.io/projected/9888dd66-5ef5-499a-92a8-c9fd32335a20-kube-api-access-cpgz5\") pod \"redhat-operators-dclvg\" (UID: \"9888dd66-5ef5-499a-92a8-c9fd32335a20\") " pod="openshift-marketplace/redhat-operators-dclvg"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.750564 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9888dd66-5ef5-499a-92a8-c9fd32335a20-catalog-content\") pod \"redhat-operators-dclvg\" (UID: \"9888dd66-5ef5-499a-92a8-c9fd32335a20\") " pod="openshift-marketplace/redhat-operators-dclvg"
Feb 16 00:09:05 crc kubenswrapper[4698]: E0216 00:09:05.750953 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.250928166 +0000 UTC m=+155.908826938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.803515 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cm6sf"]
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.843401 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 00:09:05 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld
Feb 16 00:09:05 crc kubenswrapper[4698]: [+]process-running ok
Feb 16 00:09:05 crc kubenswrapper[4698]: healthz check failed
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.843498 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.851457 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.851714 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpgz5\" (UniqueName: \"kubernetes.io/projected/9888dd66-5ef5-499a-92a8-c9fd32335a20-kube-api-access-cpgz5\") pod \"redhat-operators-dclvg\" (UID: \"9888dd66-5ef5-499a-92a8-c9fd32335a20\") " pod="openshift-marketplace/redhat-operators-dclvg"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.851745 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9888dd66-5ef5-499a-92a8-c9fd32335a20-catalog-content\") pod \"redhat-operators-dclvg\" (UID: \"9888dd66-5ef5-499a-92a8-c9fd32335a20\") " pod="openshift-marketplace/redhat-operators-dclvg"
Feb 16 00:09:05 crc kubenswrapper[4698]: E0216 00:09:05.851817 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.351796942 +0000 UTC m=+156.009695704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.851902 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.851930 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9888dd66-5ef5-499a-92a8-c9fd32335a20-utilities\") pod \"redhat-operators-dclvg\" (UID: \"9888dd66-5ef5-499a-92a8-c9fd32335a20\") " pod="openshift-marketplace/redhat-operators-dclvg"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.852359 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9888dd66-5ef5-499a-92a8-c9fd32335a20-utilities\") pod \"redhat-operators-dclvg\" (UID: \"9888dd66-5ef5-499a-92a8-c9fd32335a20\") " pod="openshift-marketplace/redhat-operators-dclvg"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.852427 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9888dd66-5ef5-499a-92a8-c9fd32335a20-catalog-content\") pod \"redhat-operators-dclvg\" (UID: \"9888dd66-5ef5-499a-92a8-c9fd32335a20\") " pod="openshift-marketplace/redhat-operators-dclvg"
Feb 16 00:09:05 crc kubenswrapper[4698]: E0216 00:09:05.852653 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.35264601 +0000 UTC m=+156.010544772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.888722 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpgz5\" (UniqueName: \"kubernetes.io/projected/9888dd66-5ef5-499a-92a8-c9fd32335a20-kube-api-access-cpgz5\") pod \"redhat-operators-dclvg\" (UID: \"9888dd66-5ef5-499a-92a8-c9fd32335a20\") " pod="openshift-marketplace/redhat-operators-dclvg"
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.953276 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:05 crc kubenswrapper[4698]: E0216 00:09:05.953668 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.453649013 +0000 UTC m=+156.111547775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.959311 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 16 00:09:05 crc kubenswrapper[4698]: I0216 00:09:05.959861 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:05 crc kubenswrapper[4698]: E0216 00:09:05.960313 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.460298493 +0000 UTC m=+156.118197255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:05 crc kubenswrapper[4698]: W0216 00:09:05.985914 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode6eac5bf_5ba6_4358_a2c3_f8697c73b765.slice/crio-a582f72bfe1b89b4ee9ad3fac4fb71683bd2144b280b611fe0569706cfef681d WatchSource:0}: Error finding container a582f72bfe1b89b4ee9ad3fac4fb71683bd2144b280b611fe0569706cfef681d: Status 404 returned error can't find the container with id a582f72bfe1b89b4ee9ad3fac4fb71683bd2144b280b611fe0569706cfef681d
Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.018274 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dclvg"
Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.060914 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.060884 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.560862844 +0000 UTC m=+156.218761606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.061252 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.061554 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.561545366 +0000 UTC m=+156.219444118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.128610 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bwn9c"]
Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.163364 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.163635 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.663583496 +0000 UTC m=+156.321482258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.163750 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.164166 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.664157463 +0000 UTC m=+156.322056215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.264700 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.264890 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.764865012 +0000 UTC m=+156.422763774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.265066 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff"
Feb 16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.265434 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.765415477 +0000 UTC m=+156.423314239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.367375 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.368305 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.868282946 +0000 UTC m=+156.526181698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.416867 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cm6sf" event={"ID":"65952246-da5c-4f4c-bb5a-a0b236b3675f","Type":"ContainerStarted","Data":"8a3c3f2b1750a494c7ae109f55b85d6af42513de0a3a3067b0a806fc27c19174"}
Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.417196 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cm6sf" event={"ID":"65952246-da5c-4f4c-bb5a-a0b236b3675f","Type":"ContainerStarted","Data":"6f96d656b9665a3e9a94ddc632b3580395ca14974e355d8cd21138b1293a7dfb"}
Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.419319 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwn9c" event={"ID":"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c","Type":"ContainerStarted","Data":"95e53ced78ffc4fc72d8abfbe9542921536e659be7df7c3d05145f1152f529fb"}
Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.419353 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwn9c" event={"ID":"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c","Type":"ContainerStarted","Data":"d5a7351cbc74f725b41029b83aa00dc53d976b827913614ce1ea41ee138b166c"}
Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.423071 4698 generic.go:334] "Generic (PLEG): container finished" podID="fb6b8789-217e-460f-95bb-e32613af20ca" containerID="a3abc182bba26ec3da9e4256bb85897c8b1ae5be97cb45c3bf0f04a5e135dffd" exitCode=0
Feb 16 
00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.424648 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fb6b8789-217e-460f-95bb-e32613af20ca","Type":"ContainerDied","Data":"a3abc182bba26ec3da9e4256bb85897c8b1ae5be97cb45c3bf0f04a5e135dffd"} Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.431120 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" event={"ID":"291f913a-6566-409f-8663-e2f695edf9a6","Type":"ContainerStarted","Data":"4c0bf2b306e5a8fae1adf601a5e0712d2b830ab339f4d7b98d0ea0e0c828ab2f"} Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.444876 4698 generic.go:334] "Generic (PLEG): container finished" podID="6bb57b35-984c-43d1-8c3e-d3311bb457f4" containerID="615475c5bf369669b9108711e08cd1efacc9d25a20095ed90e640d42d5974ede" exitCode=0 Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.444955 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqp8r" event={"ID":"6bb57b35-984c-43d1-8c3e-d3311bb457f4","Type":"ContainerDied","Data":"615475c5bf369669b9108711e08cd1efacc9d25a20095ed90e640d42d5974ede"} Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.449244 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e6eac5bf-5ba6-4358-a2c3-f8697c73b765","Type":"ContainerStarted","Data":"a582f72bfe1b89b4ee9ad3fac4fb71683bd2144b280b611fe0569706cfef681d"} Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.471169 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 
16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.471677 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:06.971658679 +0000 UTC m=+156.629557441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.573521 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.573981 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:07.073944761 +0000 UTC m=+156.731843523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.574311 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.575133 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:07.075110635 +0000 UTC m=+156.733009397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.675875 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.677133 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:07.177108114 +0000 UTC m=+156.835006876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.691250 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dclvg"] Feb 16 00:09:06 crc kubenswrapper[4698]: W0216 00:09:06.714820 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9888dd66_5ef5_499a_92a8_c9fd32335a20.slice/crio-b2156e14dff14308a7a3f0405323fe70039d22deb6351935091edfc13c74e19e WatchSource:0}: Error finding container b2156e14dff14308a7a3f0405323fe70039d22deb6351935091edfc13c74e19e: Status 404 returned error can't find the container with id b2156e14dff14308a7a3f0405323fe70039d22deb6351935091edfc13c74e19e Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.739761 4698 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.778630 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.779337 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:07.279312312 +0000 UTC m=+156.937211074 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.856417 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:06 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:06 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:06 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.856522 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.884103 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.884414 4698 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:07.384324401 +0000 UTC m=+157.042223163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.884586 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.885424 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:07.38539315 +0000 UTC m=+157.043292072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:06 crc kubenswrapper[4698]: I0216 00:09:06.985661 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:06 crc kubenswrapper[4698]: E0216 00:09:06.986367 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:07.486324569 +0000 UTC m=+157.144223331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.088289 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:07 crc kubenswrapper[4698]: E0216 00:09:07.088795 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:07.58877867 +0000 UTC m=+157.246677432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:07 crc kubenswrapper[4698]: E0216 00:09:07.191846 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:07.691816466 +0000 UTC m=+157.349715228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.192259 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.194238 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:07 crc kubenswrapper[4698]: E0216 00:09:07.194870 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:07.694853628 +0000 UTC m=+157.352752390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.295942 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:07 crc kubenswrapper[4698]: E0216 00:09:07.296427 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:07.796407796 +0000 UTC m=+157.454306558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.397772 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:07 crc kubenswrapper[4698]: E0216 00:09:07.398288 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:09:07.898269168 +0000 UTC m=+157.556167930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hbrff" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.474796 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" event={"ID":"291f913a-6566-409f-8663-e2f695edf9a6","Type":"ContainerStarted","Data":"dbd0d8a8eca2bc436ad1e1d4601f100aea485d48dbe0751fffb7158c1cc428af"} Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.474856 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" event={"ID":"291f913a-6566-409f-8663-e2f695edf9a6","Type":"ContainerStarted","Data":"3f1fac9073e2aff8dc828ebdc8a5cd536cce4c5279b4fb1a8b67a44d3d46d988"} Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.482538 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e6eac5bf-5ba6-4358-a2c3-f8697c73b765","Type":"ContainerStarted","Data":"7a6e915f71c722ff14de18645463371d6c07471cb9cf9238193fcea78892d5f9"} Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.502434 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:07 crc kubenswrapper[4698]: E0216 00:09:07.502946 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 00:09:08.00292244 +0000 UTC m=+157.660821202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.502975 4698 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T00:09:06.739800582Z","Handler":null,"Name":""} Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.503909 4698 generic.go:334] "Generic (PLEG): container finished" podID="65952246-da5c-4f4c-bb5a-a0b236b3675f" containerID="8a3c3f2b1750a494c7ae109f55b85d6af42513de0a3a3067b0a806fc27c19174" exitCode=0 Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.504135 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cm6sf" event={"ID":"65952246-da5c-4f4c-bb5a-a0b236b3675f","Type":"ContainerDied","Data":"8a3c3f2b1750a494c7ae109f55b85d6af42513de0a3a3067b0a806fc27c19174"} Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.507253 4698 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.507289 4698 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with 
name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.507894 4698 generic.go:334] "Generic (PLEG): container finished" podID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" containerID="95e53ced78ffc4fc72d8abfbe9542921536e659be7df7c3d05145f1152f529fb" exitCode=0 Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.507947 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwn9c" event={"ID":"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c","Type":"ContainerDied","Data":"95e53ced78ffc4fc72d8abfbe9542921536e659be7df7c3d05145f1152f529fb"} Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.513877 4698 generic.go:334] "Generic (PLEG): container finished" podID="9888dd66-5ef5-499a-92a8-c9fd32335a20" containerID="183173e6d34a45c3a2fc8cde6a5e3c255c736a4595aa47eeb5b6ed633c95062a" exitCode=0 Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.515568 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dclvg" event={"ID":"9888dd66-5ef5-499a-92a8-c9fd32335a20","Type":"ContainerDied","Data":"183173e6d34a45c3a2fc8cde6a5e3c255c736a4595aa47eeb5b6ed633c95062a"} Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.515651 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dclvg" event={"ID":"9888dd66-5ef5-499a-92a8-c9fd32335a20","Type":"ContainerStarted","Data":"b2156e14dff14308a7a3f0405323fe70039d22deb6351935091edfc13c74e19e"} Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.521886 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rcxmx" podStartSLOduration=16.521866052 podStartE2EDuration="16.521866052s" podCreationTimestamp="2026-02-16 00:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 00:09:07.496420107 +0000 UTC m=+157.154318869" watchObservedRunningTime="2026-02-16 00:09:07.521866052 +0000 UTC m=+157.179764814" Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.533337 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.533317555 podStartE2EDuration="2.533317555s" podCreationTimestamp="2026-02-16 00:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:07.520563632 +0000 UTC m=+157.178462394" watchObservedRunningTime="2026-02-16 00:09:07.533317555 +0000 UTC m=+157.191216317" Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.605530 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.612733 4698 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.612774 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.646938 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hbrff\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.707563 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.734035 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.796963 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.846783 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:07 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:07 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:07 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.846858 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:07 crc kubenswrapper[4698]: I0216 00:09:07.861809 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.013723 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb6b8789-217e-460f-95bb-e32613af20ca-kubelet-dir\") pod \"fb6b8789-217e-460f-95bb-e32613af20ca\" (UID: \"fb6b8789-217e-460f-95bb-e32613af20ca\") " Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.014419 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb6b8789-217e-460f-95bb-e32613af20ca-kube-api-access\") pod \"fb6b8789-217e-460f-95bb-e32613af20ca\" (UID: \"fb6b8789-217e-460f-95bb-e32613af20ca\") " Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.014345 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb6b8789-217e-460f-95bb-e32613af20ca-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fb6b8789-217e-460f-95bb-e32613af20ca" (UID: "fb6b8789-217e-460f-95bb-e32613af20ca"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.024449 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb6b8789-217e-460f-95bb-e32613af20ca-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fb6b8789-217e-460f-95bb-e32613af20ca" (UID: "fb6b8789-217e-460f-95bb-e32613af20ca"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.116075 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb6b8789-217e-460f-95bb-e32613af20ca-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.116111 4698 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb6b8789-217e-460f-95bb-e32613af20ca-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.350988 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hbrff"] Feb 16 00:09:08 crc kubenswrapper[4698]: W0216 00:09:08.379861 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e628ec8_31d7_43de_9c56_58f049dd8935.slice/crio-1882cc046b514fdf1d66a6d8c46f5d5ee098f06de2de6aa7ac73ffb7754107b3 WatchSource:0}: Error finding container 1882cc046b514fdf1d66a6d8c46f5d5ee098f06de2de6aa7ac73ffb7754107b3: Status 404 returned error can't find the container with id 1882cc046b514fdf1d66a6d8c46f5d5ee098f06de2de6aa7ac73ffb7754107b3 Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.530832 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.530825 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fb6b8789-217e-460f-95bb-e32613af20ca","Type":"ContainerDied","Data":"f3d3b15f80a890f75c2883e8ca8f96fc04ad812fb5a403da8f14708df649bddb"} Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.530916 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3d3b15f80a890f75c2883e8ca8f96fc04ad812fb5a403da8f14708df649bddb" Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.532885 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" event={"ID":"9e628ec8-31d7-43de-9c56-58f049dd8935","Type":"ContainerStarted","Data":"1882cc046b514fdf1d66a6d8c46f5d5ee098f06de2de6aa7ac73ffb7754107b3"} Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.535606 4698 generic.go:334] "Generic (PLEG): container finished" podID="e6eac5bf-5ba6-4358-a2c3-f8697c73b765" containerID="7a6e915f71c722ff14de18645463371d6c07471cb9cf9238193fcea78892d5f9" exitCode=0 Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.536602 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e6eac5bf-5ba6-4358-a2c3-f8697c73b765","Type":"ContainerDied","Data":"7a6e915f71c722ff14de18645463371d6c07471cb9cf9238193fcea78892d5f9"} Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.768513 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.774849 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-kkhhq" Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.846162 4698 
patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:08 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:08 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:08 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:08 crc kubenswrapper[4698]: I0216 00:09:08.846245 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:09 crc kubenswrapper[4698]: I0216 00:09:09.244648 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 16 00:09:09 crc kubenswrapper[4698]: I0216 00:09:09.555020 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" event={"ID":"9e628ec8-31d7-43de-9c56-58f049dd8935","Type":"ContainerStarted","Data":"90985e89ee3f6346256ff92c05e1e220f34ee37aab15886e16bd0a05fb9571fa"} Feb 16 00:09:09 crc kubenswrapper[4698]: I0216 00:09:09.581567 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" podStartSLOduration=132.58154354 podStartE2EDuration="2m12.58154354s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:09.577426419 +0000 UTC m=+159.235325191" watchObservedRunningTime="2026-02-16 00:09:09.58154354 +0000 UTC m=+159.239442302" Feb 16 00:09:09 crc kubenswrapper[4698]: I0216 00:09:09.589098 
4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ngf8q" Feb 16 00:09:09 crc kubenswrapper[4698]: I0216 00:09:09.842301 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:09 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:09 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:09 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:09 crc kubenswrapper[4698]: I0216 00:09:09.842844 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:09 crc kubenswrapper[4698]: I0216 00:09:09.907359 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 00:09:10 crc kubenswrapper[4698]: I0216 00:09:10.060914 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6eac5bf-5ba6-4358-a2c3-f8697c73b765-kubelet-dir\") pod \"e6eac5bf-5ba6-4358-a2c3-f8697c73b765\" (UID: \"e6eac5bf-5ba6-4358-a2c3-f8697c73b765\") " Feb 16 00:09:10 crc kubenswrapper[4698]: I0216 00:09:10.061055 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6eac5bf-5ba6-4358-a2c3-f8697c73b765-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e6eac5bf-5ba6-4358-a2c3-f8697c73b765" (UID: "e6eac5bf-5ba6-4358-a2c3-f8697c73b765"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:09:10 crc kubenswrapper[4698]: I0216 00:09:10.061161 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6eac5bf-5ba6-4358-a2c3-f8697c73b765-kube-api-access\") pod \"e6eac5bf-5ba6-4358-a2c3-f8697c73b765\" (UID: \"e6eac5bf-5ba6-4358-a2c3-f8697c73b765\") " Feb 16 00:09:10 crc kubenswrapper[4698]: I0216 00:09:10.061580 4698 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6eac5bf-5ba6-4358-a2c3-f8697c73b765-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:10 crc kubenswrapper[4698]: I0216 00:09:10.069932 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6eac5bf-5ba6-4358-a2c3-f8697c73b765-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e6eac5bf-5ba6-4358-a2c3-f8697c73b765" (UID: "e6eac5bf-5ba6-4358-a2c3-f8697c73b765"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:09:10 crc kubenswrapper[4698]: I0216 00:09:10.163058 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6eac5bf-5ba6-4358-a2c3-f8697c73b765-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:10 crc kubenswrapper[4698]: I0216 00:09:10.565849 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e6eac5bf-5ba6-4358-a2c3-f8697c73b765","Type":"ContainerDied","Data":"a582f72bfe1b89b4ee9ad3fac4fb71683bd2144b280b611fe0569706cfef681d"} Feb 16 00:09:10 crc kubenswrapper[4698]: I0216 00:09:10.566477 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a582f72bfe1b89b4ee9ad3fac4fb71683bd2144b280b611fe0569706cfef681d" Feb 16 00:09:10 crc kubenswrapper[4698]: I0216 00:09:10.566643 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:10 crc kubenswrapper[4698]: I0216 00:09:10.566657 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 00:09:10 crc kubenswrapper[4698]: I0216 00:09:10.842393 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:10 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:10 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:10 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:10 crc kubenswrapper[4698]: I0216 00:09:10.842466 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:11 crc kubenswrapper[4698]: I0216 00:09:11.842211 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:11 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:11 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:11 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:11 crc kubenswrapper[4698]: I0216 00:09:11.842283 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:12 crc kubenswrapper[4698]: I0216 00:09:12.842321 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:12 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:12 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:12 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:12 crc kubenswrapper[4698]: I0216 00:09:12.842962 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:13 crc kubenswrapper[4698]: I0216 00:09:13.658685 4698 patch_prober.go:28] interesting pod/console-f9d7485db-x6shs container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 16 00:09:13 crc kubenswrapper[4698]: I0216 00:09:13.658766 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x6shs" podUID="4678f0b3-74d6-4ea2-9294-6c9bc5e9de25" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 16 00:09:13 crc kubenswrapper[4698]: I0216 00:09:13.844496 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:13 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:13 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:13 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:13 crc kubenswrapper[4698]: I0216 00:09:13.844590 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:14 crc kubenswrapper[4698]: I0216 00:09:14.328418 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-4qvcf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 16 00:09:14 crc kubenswrapper[4698]: I0216 00:09:14.329032 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4qvcf" podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 16 00:09:14 crc kubenswrapper[4698]: I0216 00:09:14.328595 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-4qvcf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 16 00:09:14 crc kubenswrapper[4698]: I0216 00:09:14.329545 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4qvcf" podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 16 00:09:14 crc kubenswrapper[4698]: I0216 00:09:14.842838 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:14 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:14 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:14 crc kubenswrapper[4698]: healthz check 
failed Feb 16 00:09:14 crc kubenswrapper[4698]: I0216 00:09:14.842938 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:15 crc kubenswrapper[4698]: I0216 00:09:15.842175 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:15 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:15 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:15 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:15 crc kubenswrapper[4698]: I0216 00:09:15.842674 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:16 crc kubenswrapper[4698]: I0216 00:09:16.842092 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:16 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:16 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:16 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:16 crc kubenswrapper[4698]: I0216 00:09:16.842329 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:17 crc 
kubenswrapper[4698]: I0216 00:09:17.842260 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:17 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Feb 16 00:09:17 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:17 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:17 crc kubenswrapper[4698]: I0216 00:09:17.842357 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:18 crc kubenswrapper[4698]: I0216 00:09:18.841510 4698 patch_prober.go:28] interesting pod/router-default-5444994796-2dhzm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:09:18 crc kubenswrapper[4698]: [+]has-synced ok Feb 16 00:09:18 crc kubenswrapper[4698]: [+]process-running ok Feb 16 00:09:18 crc kubenswrapper[4698]: healthz check failed Feb 16 00:09:18 crc kubenswrapper[4698]: I0216 00:09:18.841603 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2dhzm" podUID="b700b649-4899-457c-afe5-575cf4a8907e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:09:19 crc kubenswrapper[4698]: I0216 00:09:19.842097 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:09:19 crc kubenswrapper[4698]: I0216 00:09:19.846506 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ingress/router-default-5444994796-2dhzm" Feb 16 00:09:20 crc kubenswrapper[4698]: I0216 00:09:20.881363 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs\") pod \"network-metrics-daemon-fgr4f\" (UID: \"87629f1e-d9d5-4302-a92a-f9ac3bad1707\") " pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:09:20 crc kubenswrapper[4698]: I0216 00:09:20.890927 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87629f1e-d9d5-4302-a92a-f9ac3bad1707-metrics-certs\") pod \"network-metrics-daemon-fgr4f\" (UID: \"87629f1e-d9d5-4302-a92a-f9ac3bad1707\") " pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:09:21 crc kubenswrapper[4698]: I0216 00:09:21.159529 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgr4f" Feb 16 00:09:22 crc kubenswrapper[4698]: I0216 00:09:22.437428 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rn7cx"] Feb 16 00:09:22 crc kubenswrapper[4698]: I0216 00:09:22.439436 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" podUID="96145a82-f664-45ba-805c-3721f813c8a9" containerName="controller-manager" containerID="cri-o://25d4fb2305e64b85e157f46002c4fd01b56a7eb48ff51880f6db655ff7e6027e" gracePeriod=30 Feb 16 00:09:22 crc kubenswrapper[4698]: I0216 00:09:22.445665 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr"] Feb 16 00:09:22 crc kubenswrapper[4698]: I0216 00:09:22.445891 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" podUID="92d745b7-0280-480b-b052-c2fd5499c43e" containerName="route-controller-manager" containerID="cri-o://6dc5fadab5fbafba10a4ab05a8a4dc041df3879a7d29b05c67ca964d3a38f8bb" gracePeriod=30 Feb 16 00:09:22 crc kubenswrapper[4698]: I0216 00:09:22.725860 4698 generic.go:334] "Generic (PLEG): container finished" podID="92d745b7-0280-480b-b052-c2fd5499c43e" containerID="6dc5fadab5fbafba10a4ab05a8a4dc041df3879a7d29b05c67ca964d3a38f8bb" exitCode=0 Feb 16 00:09:22 crc kubenswrapper[4698]: I0216 00:09:22.725914 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" event={"ID":"92d745b7-0280-480b-b052-c2fd5499c43e","Type":"ContainerDied","Data":"6dc5fadab5fbafba10a4ab05a8a4dc041df3879a7d29b05c67ca964d3a38f8bb"} Feb 16 00:09:23 crc kubenswrapper[4698]: I0216 00:09:23.568792 4698 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-k8vxr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 16 00:09:23 crc kubenswrapper[4698]: I0216 00:09:23.568889 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" podUID="92d745b7-0280-480b-b052-c2fd5499c43e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 16 00:09:23 crc kubenswrapper[4698]: I0216 00:09:23.667006 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:09:23 crc kubenswrapper[4698]: I0216 00:09:23.673598 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-f9d7485db-x6shs" Feb 16 00:09:23 crc kubenswrapper[4698]: I0216 00:09:23.737494 4698 generic.go:334] "Generic (PLEG): container finished" podID="96145a82-f664-45ba-805c-3721f813c8a9" containerID="25d4fb2305e64b85e157f46002c4fd01b56a7eb48ff51880f6db655ff7e6027e" exitCode=0 Feb 16 00:09:23 crc kubenswrapper[4698]: I0216 00:09:23.737658 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" event={"ID":"96145a82-f664-45ba-805c-3721f813c8a9","Type":"ContainerDied","Data":"25d4fb2305e64b85e157f46002c4fd01b56a7eb48ff51880f6db655ff7e6027e"} Feb 16 00:09:23 crc kubenswrapper[4698]: I0216 00:09:23.798464 4698 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rn7cx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 16 00:09:23 crc kubenswrapper[4698]: I0216 00:09:23.798573 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" podUID="96145a82-f664-45ba-805c-3721f813c8a9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 16 00:09:24 crc kubenswrapper[4698]: I0216 00:09:24.331219 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-4qvcf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 16 00:09:24 crc kubenswrapper[4698]: I0216 00:09:24.331306 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4qvcf" podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" probeResult="failure" 
output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 16 00:09:24 crc kubenswrapper[4698]: I0216 00:09:24.331409 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-4qvcf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 16 00:09:24 crc kubenswrapper[4698]: I0216 00:09:24.331504 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4qvcf" podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 16 00:09:24 crc kubenswrapper[4698]: I0216 00:09:24.331569 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-4qvcf" Feb 16 00:09:24 crc kubenswrapper[4698]: I0216 00:09:24.332644 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-4qvcf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 16 00:09:24 crc kubenswrapper[4698]: I0216 00:09:24.332693 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4qvcf" podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 16 00:09:24 crc kubenswrapper[4698]: I0216 00:09:24.332655 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"ac2684e00fdde09bd0242ce13382108463a035920a431aeccd84642780f860df"} 
pod="openshift-console/downloads-7954f5f757-4qvcf" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 16 00:09:24 crc kubenswrapper[4698]: I0216 00:09:24.332754 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-4qvcf" podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" containerID="cri-o://ac2684e00fdde09bd0242ce13382108463a035920a431aeccd84642780f860df" gracePeriod=2 Feb 16 00:09:25 crc kubenswrapper[4698]: I0216 00:09:25.751121 4698 generic.go:334] "Generic (PLEG): container finished" podID="ec24b0ba-9563-4228-af90-7774e49f5505" containerID="ac2684e00fdde09bd0242ce13382108463a035920a431aeccd84642780f860df" exitCode=0 Feb 16 00:09:25 crc kubenswrapper[4698]: I0216 00:09:25.751218 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4qvcf" event={"ID":"ec24b0ba-9563-4228-af90-7774e49f5505","Type":"ContainerDied","Data":"ac2684e00fdde09bd0242ce13382108463a035920a431aeccd84642780f860df"} Feb 16 00:09:27 crc kubenswrapper[4698]: I0216 00:09:27.046189 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:09:27 crc kubenswrapper[4698]: I0216 00:09:27.046272 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:09:27 crc kubenswrapper[4698]: I0216 00:09:27.803244 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:09:34 crc kubenswrapper[4698]: I0216 00:09:34.329195 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-4qvcf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 16 00:09:34 crc kubenswrapper[4698]: I0216 00:09:34.332090 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4qvcf" podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 16 00:09:34 crc kubenswrapper[4698]: I0216 00:09:34.534443 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cmfwl" Feb 16 00:09:34 crc kubenswrapper[4698]: I0216 00:09:34.568150 4698 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-k8vxr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 00:09:34 crc kubenswrapper[4698]: I0216 00:09:34.568247 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" podUID="92d745b7-0280-480b-b052-c2fd5499c43e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 00:09:34 crc kubenswrapper[4698]: I0216 00:09:34.797727 4698 patch_prober.go:28] interesting 
pod/controller-manager-879f6c89f-rn7cx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 00:09:34 crc kubenswrapper[4698]: I0216 00:09:34.797832 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" podUID="96145a82-f664-45ba-805c-3721f813c8a9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 00:09:34 crc kubenswrapper[4698]: I0216 00:09:34.813239 4698 generic.go:334] "Generic (PLEG): container finished" podID="a0c45070-058d-4223-a78e-11b1319eff38" containerID="63c1459bceb154cd5104d1a8d0e699c96e4ee0f5d29054ccc48c736801f521d2" exitCode=0 Feb 16 00:09:34 crc kubenswrapper[4698]: I0216 00:09:34.813294 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29520000-k87pz" event={"ID":"a0c45070-058d-4223-a78e-11b1319eff38","Type":"ContainerDied","Data":"63c1459bceb154cd5104d1a8d0e699c96e4ee0f5d29054ccc48c736801f521d2"} Feb 16 00:09:36 crc kubenswrapper[4698]: E0216 00:09:36.333234 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 00:09:36 crc kubenswrapper[4698]: E0216 00:09:36.333666 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mczph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4644p_openshift-marketplace(d741b08c-0e5a-40aa-ba0b-6f11743daa22): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 00:09:36 crc kubenswrapper[4698]: E0216 00:09:36.334968 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4644p" podUID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" Feb 16 00:09:37 crc 
kubenswrapper[4698]: E0216 00:09:37.454718 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4644p" podUID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.524601 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29520000-k87pz" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.530016 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:09:37 crc kubenswrapper[4698]: E0216 00:09:37.547669 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 00:09:37 crc kubenswrapper[4698]: E0216 00:09:37.547967 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqhnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xmh8j_openshift-marketplace(5308c07c-9d3d-4ead-8c6e-19c51adf5228): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.548976 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:09:37 crc kubenswrapper[4698]: E0216 00:09:37.549121 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xmh8j" podUID="5308c07c-9d3d-4ead-8c6e-19c51adf5228" Feb 16 00:09:37 crc kubenswrapper[4698]: E0216 00:09:37.550417 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 16 00:09:37 crc kubenswrapper[4698]: E0216 00:09:37.550547 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kjhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fxx2x_openshift-marketplace(a92430cf-e02d-41ee-862e-d785decce5ec): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 00:09:37 crc kubenswrapper[4698]: E0216 00:09:37.552016 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fxx2x" podUID="a92430cf-e02d-41ee-862e-d785decce5ec" Feb 16 00:09:37 crc 
kubenswrapper[4698]: I0216 00:09:37.660570 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d745b7-0280-480b-b052-c2fd5499c43e-config\") pod \"92d745b7-0280-480b-b052-c2fd5499c43e\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.660655 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-proxy-ca-bundles\") pod \"96145a82-f664-45ba-805c-3721f813c8a9\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.660697 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92d745b7-0280-480b-b052-c2fd5499c43e-serving-cert\") pod \"92d745b7-0280-480b-b052-c2fd5499c43e\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.660802 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54sgt\" (UniqueName: \"kubernetes.io/projected/96145a82-f664-45ba-805c-3721f813c8a9-kube-api-access-54sgt\") pod \"96145a82-f664-45ba-805c-3721f813c8a9\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.660837 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-config\") pod \"96145a82-f664-45ba-805c-3721f813c8a9\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.660877 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l57wn\" (UniqueName: 
\"kubernetes.io/projected/a0c45070-058d-4223-a78e-11b1319eff38-kube-api-access-l57wn\") pod \"a0c45070-058d-4223-a78e-11b1319eff38\" (UID: \"a0c45070-058d-4223-a78e-11b1319eff38\") " Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.660900 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96145a82-f664-45ba-805c-3721f813c8a9-serving-cert\") pod \"96145a82-f664-45ba-805c-3721f813c8a9\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.660942 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5psf7\" (UniqueName: \"kubernetes.io/projected/92d745b7-0280-480b-b052-c2fd5499c43e-kube-api-access-5psf7\") pod \"92d745b7-0280-480b-b052-c2fd5499c43e\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.661045 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-client-ca\") pod \"96145a82-f664-45ba-805c-3721f813c8a9\" (UID: \"96145a82-f664-45ba-805c-3721f813c8a9\") " Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.661083 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92d745b7-0280-480b-b052-c2fd5499c43e-client-ca\") pod \"92d745b7-0280-480b-b052-c2fd5499c43e\" (UID: \"92d745b7-0280-480b-b052-c2fd5499c43e\") " Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.661120 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a0c45070-058d-4223-a78e-11b1319eff38-serviceca\") pod \"a0c45070-058d-4223-a78e-11b1319eff38\" (UID: \"a0c45070-058d-4223-a78e-11b1319eff38\") " Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.662220 4698 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d745b7-0280-480b-b052-c2fd5499c43e-config" (OuterVolumeSpecName: "config") pod "92d745b7-0280-480b-b052-c2fd5499c43e" (UID: "92d745b7-0280-480b-b052-c2fd5499c43e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.663026 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0c45070-058d-4223-a78e-11b1319eff38-serviceca" (OuterVolumeSpecName: "serviceca") pod "a0c45070-058d-4223-a78e-11b1319eff38" (UID: "a0c45070-058d-4223-a78e-11b1319eff38"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.663110 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "96145a82-f664-45ba-805c-3721f813c8a9" (UID: "96145a82-f664-45ba-805c-3721f813c8a9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.663569 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "96145a82-f664-45ba-805c-3721f813c8a9" (UID: "96145a82-f664-45ba-805c-3721f813c8a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.664530 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-config" (OuterVolumeSpecName: "config") pod "96145a82-f664-45ba-805c-3721f813c8a9" (UID: "96145a82-f664-45ba-805c-3721f813c8a9"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.665146 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d745b7-0280-480b-b052-c2fd5499c43e-client-ca" (OuterVolumeSpecName: "client-ca") pod "92d745b7-0280-480b-b052-c2fd5499c43e" (UID: "92d745b7-0280-480b-b052-c2fd5499c43e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.671000 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d745b7-0280-480b-b052-c2fd5499c43e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "92d745b7-0280-480b-b052-c2fd5499c43e" (UID: "92d745b7-0280-480b-b052-c2fd5499c43e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.672897 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d745b7-0280-480b-b052-c2fd5499c43e-kube-api-access-5psf7" (OuterVolumeSpecName: "kube-api-access-5psf7") pod "92d745b7-0280-480b-b052-c2fd5499c43e" (UID: "92d745b7-0280-480b-b052-c2fd5499c43e"). InnerVolumeSpecName "kube-api-access-5psf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.672997 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96145a82-f664-45ba-805c-3721f813c8a9-kube-api-access-54sgt" (OuterVolumeSpecName: "kube-api-access-54sgt") pod "96145a82-f664-45ba-805c-3721f813c8a9" (UID: "96145a82-f664-45ba-805c-3721f813c8a9"). InnerVolumeSpecName "kube-api-access-54sgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.677642 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96145a82-f664-45ba-805c-3721f813c8a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "96145a82-f664-45ba-805c-3721f813c8a9" (UID: "96145a82-f664-45ba-805c-3721f813c8a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.677936 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c45070-058d-4223-a78e-11b1319eff38-kube-api-access-l57wn" (OuterVolumeSpecName: "kube-api-access-l57wn") pod "a0c45070-058d-4223-a78e-11b1319eff38" (UID: "a0c45070-058d-4223-a78e-11b1319eff38"). InnerVolumeSpecName "kube-api-access-l57wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.763564 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5psf7\" (UniqueName: \"kubernetes.io/projected/92d745b7-0280-480b-b052-c2fd5499c43e-kube-api-access-5psf7\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.764357 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.764378 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92d745b7-0280-480b-b052-c2fd5499c43e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.764393 4698 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a0c45070-058d-4223-a78e-11b1319eff38-serviceca\") on node \"crc\" DevicePath 
\"\"" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.764406 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d745b7-0280-480b-b052-c2fd5499c43e-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.764418 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.764428 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92d745b7-0280-480b-b052-c2fd5499c43e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.764439 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54sgt\" (UniqueName: \"kubernetes.io/projected/96145a82-f664-45ba-805c-3721f813c8a9-kube-api-access-54sgt\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.764453 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96145a82-f664-45ba-805c-3721f813c8a9-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.764487 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96145a82-f664-45ba-805c-3721f813c8a9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.764498 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l57wn\" (UniqueName: \"kubernetes.io/projected/a0c45070-058d-4223-a78e-11b1319eff38-kube-api-access-l57wn\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.831240 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-pruner-29520000-k87pz" event={"ID":"a0c45070-058d-4223-a78e-11b1319eff38","Type":"ContainerDied","Data":"7675d8493545d0432b6904a9aaa36fe57d860f4daa713a7df2fa55be196ed986"} Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.831290 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7675d8493545d0432b6904a9aaa36fe57d860f4daa713a7df2fa55be196ed986" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.831856 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29520000-k87pz" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.833735 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.833742 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rn7cx" event={"ID":"96145a82-f664-45ba-805c-3721f813c8a9","Type":"ContainerDied","Data":"eb7f1490fb0b6707153dcb71a9a44ce1d154fb7c5b42dc4c8615051fe970dac2"} Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.833828 4698 scope.go:117] "RemoveContainer" containerID="25d4fb2305e64b85e157f46002c4fd01b56a7eb48ff51880f6db655ff7e6027e" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.837030 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" event={"ID":"92d745b7-0280-480b-b052-c2fd5499c43e","Type":"ContainerDied","Data":"060418a3b397f659319520cb0220b186322a5280359fc2199a39779c99311228"} Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.837206 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr" Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.923982 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rn7cx"] Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.933680 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rn7cx"] Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.938513 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr"] Feb 16 00:09:37 crc kubenswrapper[4698]: I0216 00:09:37.942037 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k8vxr"] Feb 16 00:09:39 crc kubenswrapper[4698]: I0216 00:09:39.239776 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d745b7-0280-480b-b052-c2fd5499c43e" path="/var/lib/kubelet/pods/92d745b7-0280-480b-b052-c2fd5499c43e/volumes" Feb 16 00:09:39 crc kubenswrapper[4698]: I0216 00:09:39.240824 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96145a82-f664-45ba-805c-3721f813c8a9" path="/var/lib/kubelet/pods/96145a82-f664-45ba-805c-3721f813c8a9/volumes" Feb 16 00:09:39 crc kubenswrapper[4698]: I0216 00:09:39.575969 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.522931 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f"] Feb 16 00:09:40 crc kubenswrapper[4698]: E0216 00:09:40.523841 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c45070-058d-4223-a78e-11b1319eff38" containerName="image-pruner" Feb 16 
00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.523890 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c45070-058d-4223-a78e-11b1319eff38" containerName="image-pruner" Feb 16 00:09:40 crc kubenswrapper[4698]: E0216 00:09:40.523926 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96145a82-f664-45ba-805c-3721f813c8a9" containerName="controller-manager" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.523935 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="96145a82-f664-45ba-805c-3721f813c8a9" containerName="controller-manager" Feb 16 00:09:40 crc kubenswrapper[4698]: E0216 00:09:40.524318 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6eac5bf-5ba6-4358-a2c3-f8697c73b765" containerName="pruner" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.524335 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6eac5bf-5ba6-4358-a2c3-f8697c73b765" containerName="pruner" Feb 16 00:09:40 crc kubenswrapper[4698]: E0216 00:09:40.524346 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d745b7-0280-480b-b052-c2fd5499c43e" containerName="route-controller-manager" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.524352 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d745b7-0280-480b-b052-c2fd5499c43e" containerName="route-controller-manager" Feb 16 00:09:40 crc kubenswrapper[4698]: E0216 00:09:40.524417 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb6b8789-217e-460f-95bb-e32613af20ca" containerName="pruner" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.524428 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6b8789-217e-460f-95bb-e32613af20ca" containerName="pruner" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.524545 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c45070-058d-4223-a78e-11b1319eff38" containerName="image-pruner" Feb 16 00:09:40 crc kubenswrapper[4698]: 
I0216 00:09:40.524557 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d745b7-0280-480b-b052-c2fd5499c43e" containerName="route-controller-manager" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.524569 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="96145a82-f664-45ba-805c-3721f813c8a9" containerName="controller-manager" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.524577 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb6b8789-217e-460f-95bb-e32613af20ca" containerName="pruner" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.524585 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6eac5bf-5ba6-4358-a2c3-f8697c73b765" containerName="pruner" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.524971 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76bddb996f-j8v4d"] Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.525155 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.528676 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.528828 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.529231 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.529672 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.529797 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.529849 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.530135 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f"] Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.530162 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76bddb996f-j8v4d"] Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.530240 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.532472 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.532820 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.533253 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.534861 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.535248 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.535277 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.544101 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.614753 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5575daf-9101-4c3a-a0fd-fd8af3930380-serving-cert\") pod \"route-controller-manager-5c58478884-cm29f\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") " pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.615893 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-serving-cert\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.616164 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5575daf-9101-4c3a-a0fd-fd8af3930380-client-ca\") pod \"route-controller-manager-5c58478884-cm29f\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") " pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.616307 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-proxy-ca-bundles\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.616327 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7spnc\" (UniqueName: \"kubernetes.io/projected/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-kube-api-access-7spnc\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.616390 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-config\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " 
pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.616587 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrcqt\" (UniqueName: \"kubernetes.io/projected/d5575daf-9101-4c3a-a0fd-fd8af3930380-kube-api-access-zrcqt\") pod \"route-controller-manager-5c58478884-cm29f\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") " pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.616655 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5575daf-9101-4c3a-a0fd-fd8af3930380-config\") pod \"route-controller-manager-5c58478884-cm29f\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") " pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.616683 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-client-ca\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.718674 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrcqt\" (UniqueName: \"kubernetes.io/projected/d5575daf-9101-4c3a-a0fd-fd8af3930380-kube-api-access-zrcqt\") pod \"route-controller-manager-5c58478884-cm29f\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") " pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.718752 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5575daf-9101-4c3a-a0fd-fd8af3930380-config\") pod \"route-controller-manager-5c58478884-cm29f\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") " pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.718789 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-client-ca\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.718837 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5575daf-9101-4c3a-a0fd-fd8af3930380-serving-cert\") pod \"route-controller-manager-5c58478884-cm29f\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") " pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.718864 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-serving-cert\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.718910 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5575daf-9101-4c3a-a0fd-fd8af3930380-client-ca\") pod \"route-controller-manager-5c58478884-cm29f\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") " pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" Feb 16 
00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.718946 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-proxy-ca-bundles\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.718971 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7spnc\" (UniqueName: \"kubernetes.io/projected/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-kube-api-access-7spnc\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.719003 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-config\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.720504 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5575daf-9101-4c3a-a0fd-fd8af3930380-config\") pod \"route-controller-manager-5c58478884-cm29f\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") " pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.720606 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5575daf-9101-4c3a-a0fd-fd8af3930380-client-ca\") pod \"route-controller-manager-5c58478884-cm29f\" (UID: 
\"d5575daf-9101-4c3a-a0fd-fd8af3930380\") " pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.723428 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-config\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.723920 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-proxy-ca-bundles\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.724863 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-client-ca\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.727565 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-serving-cert\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.728803 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5575daf-9101-4c3a-a0fd-fd8af3930380-serving-cert\") pod 
\"route-controller-manager-5c58478884-cm29f\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") " pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.737136 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7spnc\" (UniqueName: \"kubernetes.io/projected/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-kube-api-access-7spnc\") pod \"controller-manager-76bddb996f-j8v4d\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.741569 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrcqt\" (UniqueName: \"kubernetes.io/projected/d5575daf-9101-4c3a-a0fd-fd8af3930380-kube-api-access-zrcqt\") pod \"route-controller-manager-5c58478884-cm29f\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") " pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.849768 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" Feb 16 00:09:40 crc kubenswrapper[4698]: I0216 00:09:40.861520 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:41 crc kubenswrapper[4698]: E0216 00:09:41.754436 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xmh8j" podUID="5308c07c-9d3d-4ead-8c6e-19c51adf5228" Feb 16 00:09:41 crc kubenswrapper[4698]: E0216 00:09:41.754930 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fxx2x" podUID="a92430cf-e02d-41ee-862e-d785decce5ec" Feb 16 00:09:41 crc kubenswrapper[4698]: E0216 00:09:41.780901 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 16 00:09:41 crc kubenswrapper[4698]: E0216 00:09:41.781332 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghqnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bwn9c_openshift-marketplace(194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 00:09:41 crc kubenswrapper[4698]: E0216 00:09:41.782677 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-bwn9c" podUID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" Feb 16 00:09:41 crc 
kubenswrapper[4698]: E0216 00:09:41.939801 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bwn9c" podUID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" Feb 16 00:09:42 crc kubenswrapper[4698]: E0216 00:09:42.027555 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 16 00:09:42 crc kubenswrapper[4698]: E0216 00:09:42.027742 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpgz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dclvg_openshift-marketplace(9888dd66-5ef5-499a-92a8-c9fd32335a20): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 00:09:42 crc kubenswrapper[4698]: E0216 00:09:42.029309 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dclvg" podUID="9888dd66-5ef5-499a-92a8-c9fd32335a20" Feb 16 00:09:42 crc 
kubenswrapper[4698]: I0216 00:09:42.030139 4698 scope.go:117] "RemoveContainer" containerID="6dc5fadab5fbafba10a4ab05a8a4dc041df3879a7d29b05c67ca964d3a38f8bb" Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.368045 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76bddb996f-j8v4d"] Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.447747 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76bddb996f-j8v4d"] Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.512923 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f"] Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.556698 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fgr4f"] Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.570886 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f"] Feb 16 00:09:42 crc kubenswrapper[4698]: W0216 00:09:42.572877 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87629f1e_d9d5_4302_a92a_f9ac3bad1707.slice/crio-0b0e1eae0c91a8ac70128074858843134db1764717aa12bccd9dc21c62f14f9f WatchSource:0}: Error finding container 0b0e1eae0c91a8ac70128074858843134db1764717aa12bccd9dc21c62f14f9f: Status 404 returned error can't find the container with id 0b0e1eae0c91a8ac70128074858843134db1764717aa12bccd9dc21c62f14f9f Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.866580 4698 generic.go:334] "Generic (PLEG): container finished" podID="b2921317-af1c-4c00-b999-99897d66aaba" containerID="45305034106ee127a7f49af1e17dfc6014b0749348cbf6df010079b26e03815d" exitCode=0 Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.866759 4698 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrkd2" event={"ID":"b2921317-af1c-4c00-b999-99897d66aaba","Type":"ContainerDied","Data":"45305034106ee127a7f49af1e17dfc6014b0749348cbf6df010079b26e03815d"} Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.874980 4698 generic.go:334] "Generic (PLEG): container finished" podID="65952246-da5c-4f4c-bb5a-a0b236b3675f" containerID="c48d9b41a3b1e2e2a5616a3d43631c2f5e7081ea64b8224fcd833775e80b2e54" exitCode=0 Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.875108 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cm6sf" event={"ID":"65952246-da5c-4f4c-bb5a-a0b236b3675f","Type":"ContainerDied","Data":"c48d9b41a3b1e2e2a5616a3d43631c2f5e7081ea64b8224fcd833775e80b2e54"} Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.878122 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" event={"ID":"87629f1e-d9d5-4302-a92a-f9ac3bad1707","Type":"ContainerStarted","Data":"0b0e1eae0c91a8ac70128074858843134db1764717aa12bccd9dc21c62f14f9f"} Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.883715 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" event={"ID":"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1","Type":"ContainerStarted","Data":"55e73d46d461bcf88aecd1ee1b7e16ba46d8989d7ee591f3e64f03e173086097"} Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.883752 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" event={"ID":"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1","Type":"ContainerStarted","Data":"719b510e714133ee6b1aed003db9111251fd5dbae1aba1bbdc972dd769a2a08d"} Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.884009 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.887007 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" podUID="d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1" containerName="controller-manager" containerID="cri-o://55e73d46d461bcf88aecd1ee1b7e16ba46d8989d7ee591f3e64f03e173086097" gracePeriod=30 Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.891177 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.901293 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4qvcf" event={"ID":"ec24b0ba-9563-4228-af90-7774e49f5505","Type":"ContainerStarted","Data":"73e34466cfeeb68c448e050b9f2118485bcef7fe667ebcd63a8445be170c35f1"} Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.901397 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4qvcf" Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.902393 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-4qvcf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.902473 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4qvcf" podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.905294 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="6bb57b35-984c-43d1-8c3e-d3311bb457f4" containerID="f301a13b64cb02c50a29807c9edeb9485be2968a76bf691e34f7e96fa4e52fae" exitCode=0 Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.905375 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqp8r" event={"ID":"6bb57b35-984c-43d1-8c3e-d3311bb457f4","Type":"ContainerDied","Data":"f301a13b64cb02c50a29807c9edeb9485be2968a76bf691e34f7e96fa4e52fae"} Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.912746 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" event={"ID":"d5575daf-9101-4c3a-a0fd-fd8af3930380","Type":"ContainerStarted","Data":"021db32b8a27fa802226156239ceafaa7c7cd9f2bf1ad4a194414c8351d96f93"} Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.912846 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" event={"ID":"d5575daf-9101-4c3a-a0fd-fd8af3930380","Type":"ContainerStarted","Data":"d5a4e8a64418f2009dd67a3c565df10e172670cc1ddbf44954837473d60f9845"} Feb 16 00:09:42 crc kubenswrapper[4698]: E0216 00:09:42.921294 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dclvg" podUID="9888dd66-5ef5-499a-92a8-c9fd32335a20" Feb 16 00:09:42 crc kubenswrapper[4698]: I0216 00:09:42.970114 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" podStartSLOduration=20.97009169 podStartE2EDuration="20.97009169s" podCreationTimestamp="2026-02-16 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 00:09:42.947798603 +0000 UTC m=+192.605697385" watchObservedRunningTime="2026-02-16 00:09:42.97009169 +0000 UTC m=+192.627990452" Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.287168 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.483186 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-proxy-ca-bundles\") pod \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.484081 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7spnc\" (UniqueName: \"kubernetes.io/projected/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-kube-api-access-7spnc\") pod \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.485507 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-serving-cert\") pod \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.485696 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-client-ca\") pod \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.485854 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-config\") pod \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\" (UID: \"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1\") " Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.485572 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1" (UID: "d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.486378 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.486606 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-client-ca" (OuterVolumeSpecName: "client-ca") pod "d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1" (UID: "d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.487099 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-config" (OuterVolumeSpecName: "config") pod "d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1" (UID: "d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.494368 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 16 00:09:43 crc kubenswrapper[4698]: E0216 00:09:43.494676 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1" containerName="controller-manager"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.494696 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1" containerName="controller-manager"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.494844 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1" containerName="controller-manager"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.495308 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.496757 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-kube-api-access-7spnc" (OuterVolumeSpecName: "kube-api-access-7spnc") pod "d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1" (UID: "d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1"). InnerVolumeSpecName "kube-api-access-7spnc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.498291 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.498698 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.501628 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1" (UID: "d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.505887 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.524357 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"]
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.525258 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.540754 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"]
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.587664 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b00be0-37c7-41c1-b899-4a7a819fba96-serving-cert\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.588039 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cf3ac82-0316-4337-8bac-ad07dcebba9a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9cf3ac82-0316-4337-8bac-ad07dcebba9a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.588076 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-client-ca\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.588240 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cf3ac82-0316-4337-8bac-ad07dcebba9a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9cf3ac82-0316-4337-8bac-ad07dcebba9a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.588289 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55d88\" (UniqueName: \"kubernetes.io/projected/38b00be0-37c7-41c1-b899-4a7a819fba96-kube-api-access-55d88\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.588308 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-config\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.588509 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-proxy-ca-bundles\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.588657 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7spnc\" (UniqueName: \"kubernetes.io/projected/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-kube-api-access-7spnc\") on node \"crc\" DevicePath \"\""
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.588673 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.588683 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.588691 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1-config\") on node \"crc\" DevicePath \"\""
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.690422 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b00be0-37c7-41c1-b899-4a7a819fba96-serving-cert\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.690497 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cf3ac82-0316-4337-8bac-ad07dcebba9a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9cf3ac82-0316-4337-8bac-ad07dcebba9a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.690565 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-client-ca\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.690636 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cf3ac82-0316-4337-8bac-ad07dcebba9a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9cf3ac82-0316-4337-8bac-ad07dcebba9a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.690670 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55d88\" (UniqueName: \"kubernetes.io/projected/38b00be0-37c7-41c1-b899-4a7a819fba96-kube-api-access-55d88\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.690718 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-config\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.690752 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-proxy-ca-bundles\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.691569 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cf3ac82-0316-4337-8bac-ad07dcebba9a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9cf3ac82-0316-4337-8bac-ad07dcebba9a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.692068 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-client-ca\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.695046 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-proxy-ca-bundles\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.695750 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-config\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.696160 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b00be0-37c7-41c1-b899-4a7a819fba96-serving-cert\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.707465 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cf3ac82-0316-4337-8bac-ad07dcebba9a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9cf3ac82-0316-4337-8bac-ad07dcebba9a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.708307 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55d88\" (UniqueName: \"kubernetes.io/projected/38b00be0-37c7-41c1-b899-4a7a819fba96-kube-api-access-55d88\") pod \"controller-manager-58f57f5dbb-bkkps\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.851408 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.861781 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.926787 4698 generic.go:334] "Generic (PLEG): container finished" podID="d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1" containerID="55e73d46d461bcf88aecd1ee1b7e16ba46d8989d7ee591f3e64f03e173086097" exitCode=0
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.926869 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" event={"ID":"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1","Type":"ContainerDied","Data":"55e73d46d461bcf88aecd1ee1b7e16ba46d8989d7ee591f3e64f03e173086097"}
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.926907 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d" event={"ID":"d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1","Type":"ContainerDied","Data":"719b510e714133ee6b1aed003db9111251fd5dbae1aba1bbdc972dd769a2a08d"}
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.926932 4698 scope.go:117] "RemoveContainer" containerID="55e73d46d461bcf88aecd1ee1b7e16ba46d8989d7ee591f3e64f03e173086097"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.927104 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76bddb996f-j8v4d"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.938279 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" event={"ID":"87629f1e-d9d5-4302-a92a-f9ac3bad1707","Type":"ContainerStarted","Data":"91dcb02eafdc86faccfbaeec43895cd04771cb402e18234a7cab9bff138a5939"}
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.938354 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fgr4f" event={"ID":"87629f1e-d9d5-4302-a92a-f9ac3bad1707","Type":"ContainerStarted","Data":"c9475d4f54c781a9ee0edc5bed0ffcd008646efd15995132bf3070f6f4a2f924"}
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.938904 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" podUID="d5575daf-9101-4c3a-a0fd-fd8af3930380" containerName="route-controller-manager" containerID="cri-o://021db32b8a27fa802226156239ceafaa7c7cd9f2bf1ad4a194414c8351d96f93" gracePeriod=30
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.939391 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-4qvcf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.939420 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4qvcf" podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 16 00:09:43 crc kubenswrapper[4698]: I0216 00:09:43.967397 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" podStartSLOduration=21.967377895 podStartE2EDuration="21.967377895s" podCreationTimestamp="2026-02-16 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:43.966834248 +0000 UTC m=+193.624733010" watchObservedRunningTime="2026-02-16 00:09:43.967377895 +0000 UTC m=+193.625276657"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.000027 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-fgr4f" podStartSLOduration=167.000004509 podStartE2EDuration="2m47.000004509s" podCreationTimestamp="2026-02-16 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:43.993671329 +0000 UTC m=+193.651570091" watchObservedRunningTime="2026-02-16 00:09:44.000004509 +0000 UTC m=+193.657903281"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.014271 4698 scope.go:117] "RemoveContainer" containerID="55e73d46d461bcf88aecd1ee1b7e16ba46d8989d7ee591f3e64f03e173086097"
Feb 16 00:09:44 crc kubenswrapper[4698]: E0216 00:09:44.015541 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e73d46d461bcf88aecd1ee1b7e16ba46d8989d7ee591f3e64f03e173086097\": container with ID starting with 55e73d46d461bcf88aecd1ee1b7e16ba46d8989d7ee591f3e64f03e173086097 not found: ID does not exist" containerID="55e73d46d461bcf88aecd1ee1b7e16ba46d8989d7ee591f3e64f03e173086097"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.015605 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e73d46d461bcf88aecd1ee1b7e16ba46d8989d7ee591f3e64f03e173086097"} err="failed to get container status \"55e73d46d461bcf88aecd1ee1b7e16ba46d8989d7ee591f3e64f03e173086097\": rpc error: code = NotFound desc = could not find container \"55e73d46d461bcf88aecd1ee1b7e16ba46d8989d7ee591f3e64f03e173086097\": container with ID starting with 55e73d46d461bcf88aecd1ee1b7e16ba46d8989d7ee591f3e64f03e173086097 not found: ID does not exist"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.017357 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76bddb996f-j8v4d"]
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.021041 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76bddb996f-j8v4d"]
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.203961 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 16 00:09:44 crc kubenswrapper[4698]: W0216 00:09:44.216841 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9cf3ac82_0316_4337_8bac_ad07dcebba9a.slice/crio-253ed5cf7579842b052f5ca8e01a44e34116c1a169fe546d06eb6160bed983f8 WatchSource:0}: Error finding container 253ed5cf7579842b052f5ca8e01a44e34116c1a169fe546d06eb6160bed983f8: Status 404 returned error can't find the container with id 253ed5cf7579842b052f5ca8e01a44e34116c1a169fe546d06eb6160bed983f8
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.332779 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-4qvcf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.332825 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-4qvcf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.332867 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4qvcf" podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.332905 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4qvcf" podUID="ec24b0ba-9563-4228-af90-7774e49f5505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.347366 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"]
Feb 16 00:09:44 crc kubenswrapper[4698]: W0216 00:09:44.358824 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38b00be0_37c7_41c1_b899_4a7a819fba96.slice/crio-701a708922ff52a661cfae119580c733ef5402c27612b5ac3f7e0b7cd3191ed7 WatchSource:0}: Error finding container 701a708922ff52a661cfae119580c733ef5402c27612b5ac3f7e0b7cd3191ed7: Status 404 returned error can't find the container with id 701a708922ff52a661cfae119580c733ef5402c27612b5ac3f7e0b7cd3191ed7
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.396566 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.504425 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5575daf-9101-4c3a-a0fd-fd8af3930380-client-ca\") pod \"d5575daf-9101-4c3a-a0fd-fd8af3930380\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") "
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.504505 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrcqt\" (UniqueName: \"kubernetes.io/projected/d5575daf-9101-4c3a-a0fd-fd8af3930380-kube-api-access-zrcqt\") pod \"d5575daf-9101-4c3a-a0fd-fd8af3930380\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") "
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.504535 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5575daf-9101-4c3a-a0fd-fd8af3930380-serving-cert\") pod \"d5575daf-9101-4c3a-a0fd-fd8af3930380\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") "
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.504559 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5575daf-9101-4c3a-a0fd-fd8af3930380-config\") pod \"d5575daf-9101-4c3a-a0fd-fd8af3930380\" (UID: \"d5575daf-9101-4c3a-a0fd-fd8af3930380\") "
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.505793 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5575daf-9101-4c3a-a0fd-fd8af3930380-client-ca" (OuterVolumeSpecName: "client-ca") pod "d5575daf-9101-4c3a-a0fd-fd8af3930380" (UID: "d5575daf-9101-4c3a-a0fd-fd8af3930380"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.505998 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5575daf-9101-4c3a-a0fd-fd8af3930380-config" (OuterVolumeSpecName: "config") pod "d5575daf-9101-4c3a-a0fd-fd8af3930380" (UID: "d5575daf-9101-4c3a-a0fd-fd8af3930380"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.512568 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5575daf-9101-4c3a-a0fd-fd8af3930380-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d5575daf-9101-4c3a-a0fd-fd8af3930380" (UID: "d5575daf-9101-4c3a-a0fd-fd8af3930380"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.513144 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5575daf-9101-4c3a-a0fd-fd8af3930380-kube-api-access-zrcqt" (OuterVolumeSpecName: "kube-api-access-zrcqt") pod "d5575daf-9101-4c3a-a0fd-fd8af3930380" (UID: "d5575daf-9101-4c3a-a0fd-fd8af3930380"). InnerVolumeSpecName "kube-api-access-zrcqt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.521360 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"]
Feb 16 00:09:44 crc kubenswrapper[4698]: E0216 00:09:44.522410 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5575daf-9101-4c3a-a0fd-fd8af3930380" containerName="route-controller-manager"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.522439 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5575daf-9101-4c3a-a0fd-fd8af3930380" containerName="route-controller-manager"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.522583 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5575daf-9101-4c3a-a0fd-fd8af3930380" containerName="route-controller-manager"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.523640 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.543897 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"]
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.610251 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d4lp\" (UniqueName: \"kubernetes.io/projected/5fbd7758-34a7-49b8-a669-53bf408520f3-kube-api-access-8d4lp\") pod \"route-controller-manager-787757c785-7bgpj\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.610337 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fbd7758-34a7-49b8-a669-53bf408520f3-client-ca\") pod \"route-controller-manager-787757c785-7bgpj\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.610392 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fbd7758-34a7-49b8-a669-53bf408520f3-serving-cert\") pod \"route-controller-manager-787757c785-7bgpj\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.610453 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fbd7758-34a7-49b8-a669-53bf408520f3-config\") pod \"route-controller-manager-787757c785-7bgpj\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.610525 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5575daf-9101-4c3a-a0fd-fd8af3930380-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.610538 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrcqt\" (UniqueName: \"kubernetes.io/projected/d5575daf-9101-4c3a-a0fd-fd8af3930380-kube-api-access-zrcqt\") on node \"crc\" DevicePath \"\""
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.610549 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5575daf-9101-4c3a-a0fd-fd8af3930380-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.610564 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5575daf-9101-4c3a-a0fd-fd8af3930380-config\") on node \"crc\" DevicePath \"\""
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.711897 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fbd7758-34a7-49b8-a669-53bf408520f3-serving-cert\") pod \"route-controller-manager-787757c785-7bgpj\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.711957 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fbd7758-34a7-49b8-a669-53bf408520f3-config\") pod \"route-controller-manager-787757c785-7bgpj\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.712000 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d4lp\" (UniqueName: \"kubernetes.io/projected/5fbd7758-34a7-49b8-a669-53bf408520f3-kube-api-access-8d4lp\") pod \"route-controller-manager-787757c785-7bgpj\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.712037 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fbd7758-34a7-49b8-a669-53bf408520f3-client-ca\") pod \"route-controller-manager-787757c785-7bgpj\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.713578 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fbd7758-34a7-49b8-a669-53bf408520f3-client-ca\") pod \"route-controller-manager-787757c785-7bgpj\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.716324 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fbd7758-34a7-49b8-a669-53bf408520f3-config\") pod \"route-controller-manager-787757c785-7bgpj\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.727698 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fbd7758-34a7-49b8-a669-53bf408520f3-serving-cert\") pod \"route-controller-manager-787757c785-7bgpj\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.733871 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d4lp\" (UniqueName: \"kubernetes.io/projected/5fbd7758-34a7-49b8-a669-53bf408520f3-kube-api-access-8d4lp\") pod \"route-controller-manager-787757c785-7bgpj\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.849300 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.953308 4698 generic.go:334] "Generic (PLEG): container finished" podID="d5575daf-9101-4c3a-a0fd-fd8af3930380" containerID="021db32b8a27fa802226156239ceafaa7c7cd9f2bf1ad4a194414c8351d96f93" exitCode=0
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.953502 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.956452 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" event={"ID":"d5575daf-9101-4c3a-a0fd-fd8af3930380","Type":"ContainerDied","Data":"021db32b8a27fa802226156239ceafaa7c7cd9f2bf1ad4a194414c8351d96f93"}
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.956528 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f" event={"ID":"d5575daf-9101-4c3a-a0fd-fd8af3930380","Type":"ContainerDied","Data":"d5a4e8a64418f2009dd67a3c565df10e172670cc1ddbf44954837473d60f9845"}
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.956557 4698 scope.go:117] "RemoveContainer" containerID="021db32b8a27fa802226156239ceafaa7c7cd9f2bf1ad4a194414c8351d96f93"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.962590 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cm6sf" event={"ID":"65952246-da5c-4f4c-bb5a-a0b236b3675f","Type":"ContainerStarted","Data":"78de4e7a561b7a18196574288815538f9eb6d3a4b8f309be36dc0e083462e1aa"}
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.965465 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9cf3ac82-0316-4337-8bac-ad07dcebba9a","Type":"ContainerStarted","Data":"a541b7248088ac9ecd8f5a12332b7bce284c4e327622f3ac97df433a0d078603"}
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.965500 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9cf3ac82-0316-4337-8bac-ad07dcebba9a","Type":"ContainerStarted","Data":"253ed5cf7579842b052f5ca8e01a44e34116c1a169fe546d06eb6160bed983f8"}
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.975914 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps" event={"ID":"38b00be0-37c7-41c1-b899-4a7a819fba96","Type":"ContainerStarted","Data":"7d5993bdf3cc0f1cb90732d2f4e1f9694736cfa2eb70901143024f940f8a4a72"}
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.976229 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.976241 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps" event={"ID":"38b00be0-37c7-41c1-b899-4a7a819fba96","Type":"ContainerStarted","Data":"701a708922ff52a661cfae119580c733ef5402c27612b5ac3f7e0b7cd3191ed7"}
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.985204 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrkd2" event={"ID":"b2921317-af1c-4c00-b999-99897d66aaba","Type":"ContainerStarted","Data":"c860fe262c3be3ff0ca7f4cf045ff4e8316ef15655b3f38b24d34e11567ff95e"}
Feb 16 00:09:44 crc kubenswrapper[4698]: I0216 00:09:44.991770 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqp8r" event={"ID":"6bb57b35-984c-43d1-8c3e-d3311bb457f4","Type":"ContainerStarted","Data":"0dc6058affc62865433b4330f35cb918817584b0e5012c302e6000fd40528add"}
Feb 16 00:09:45 crc kubenswrapper[4698]: I0216 00:09:45.003209 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:09:45 crc kubenswrapper[4698]: I0216 00:09:45.004497 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cm6sf" podStartSLOduration=4.481670795 podStartE2EDuration="41.004488893s" podCreationTimestamp="2026-02-16 00:09:04 +0000 UTC" firstStartedPulling="2026-02-16 00:09:07.506860193 +0000 UTC m=+157.164758955" lastFinishedPulling="2026-02-16 00:09:44.029678291 +0000 UTC m=+193.687577053" observedRunningTime="2026-02-16 00:09:45.002702426 +0000 UTC m=+194.660601178" watchObservedRunningTime="2026-02-16 00:09:45.004488893 +0000 UTC m=+194.662387655"
Feb 16 00:09:45 crc kubenswrapper[4698]: I0216 00:09:45.013177 4698 scope.go:117] "RemoveContainer" containerID="021db32b8a27fa802226156239ceafaa7c7cd9f2bf1ad4a194414c8351d96f93"
Feb 16 00:09:45 crc kubenswrapper[4698]: E0216 00:09:45.017304 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"021db32b8a27fa802226156239ceafaa7c7cd9f2bf1ad4a194414c8351d96f93\": container with ID starting with 021db32b8a27fa802226156239ceafaa7c7cd9f2bf1ad4a194414c8351d96f93 not found: ID does not exist" containerID="021db32b8a27fa802226156239ceafaa7c7cd9f2bf1ad4a194414c8351d96f93"
Feb 16 00:09:45 crc kubenswrapper[4698]: I0216 00:09:45.017358 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"021db32b8a27fa802226156239ceafaa7c7cd9f2bf1ad4a194414c8351d96f93"} err="failed to get container status \"021db32b8a27fa802226156239ceafaa7c7cd9f2bf1ad4a194414c8351d96f93\": rpc error: code = NotFound
desc = could not find container \"021db32b8a27fa802226156239ceafaa7c7cd9f2bf1ad4a194414c8351d96f93\": container with ID starting with 021db32b8a27fa802226156239ceafaa7c7cd9f2bf1ad4a194414c8351d96f93 not found: ID does not exist" Feb 16 00:09:45 crc kubenswrapper[4698]: I0216 00:09:45.023984 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:45 crc kubenswrapper[4698]: I0216 00:09:45.024072 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:45 crc kubenswrapper[4698]: I0216 00:09:45.029265 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f"] Feb 16 00:09:45 crc kubenswrapper[4698]: I0216 00:09:45.036044 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c58478884-cm29f"] Feb 16 00:09:45 crc kubenswrapper[4698]: I0216 00:09:45.049953 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.049915243 podStartE2EDuration="2.049915243s" podCreationTimestamp="2026-02-16 00:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:45.045198053 +0000 UTC m=+194.703096815" watchObservedRunningTime="2026-02-16 00:09:45.049915243 +0000 UTC m=+194.707813995" Feb 16 00:09:45 crc kubenswrapper[4698]: I0216 00:09:45.077159 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gqp8r" podStartSLOduration=3.393051684 podStartE2EDuration="41.077124536s" podCreationTimestamp="2026-02-16 00:09:04 +0000 UTC" firstStartedPulling="2026-02-16 00:09:06.449858844 +0000 UTC m=+156.107757596" 
lastFinishedPulling="2026-02-16 00:09:44.133931686 +0000 UTC m=+193.791830448" observedRunningTime="2026-02-16 00:09:45.073146679 +0000 UTC m=+194.731045451" watchObservedRunningTime="2026-02-16 00:09:45.077124536 +0000 UTC m=+194.735023298" Feb 16 00:09:45 crc kubenswrapper[4698]: I0216 00:09:45.101614 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps" podStartSLOduration=3.101591551 podStartE2EDuration="3.101591551s" podCreationTimestamp="2026-02-16 00:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:45.097744539 +0000 UTC m=+194.755643301" watchObservedRunningTime="2026-02-16 00:09:45.101591551 +0000 UTC m=+194.759490313" Feb 16 00:09:45 crc kubenswrapper[4698]: I0216 00:09:45.150238 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vrkd2" podStartSLOduration=2.902898305 podStartE2EDuration="43.150210123s" podCreationTimestamp="2026-02-16 00:09:02 +0000 UTC" firstStartedPulling="2026-02-16 00:09:04.287546287 +0000 UTC m=+153.945445049" lastFinishedPulling="2026-02-16 00:09:44.534858105 +0000 UTC m=+194.192756867" observedRunningTime="2026-02-16 00:09:45.127205203 +0000 UTC m=+194.785103965" watchObservedRunningTime="2026-02-16 00:09:45.150210123 +0000 UTC m=+194.808108885" Feb 16 00:09:45 crc kubenswrapper[4698]: I0216 00:09:45.239859 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1" path="/var/lib/kubelet/pods/d4ff4dea-ffcd-4d97-9661-b3cad03a3fc1/volumes" Feb 16 00:09:45 crc kubenswrapper[4698]: I0216 00:09:45.240412 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5575daf-9101-4c3a-a0fd-fd8af3930380" path="/var/lib/kubelet/pods/d5575daf-9101-4c3a-a0fd-fd8af3930380/volumes" Feb 16 00:09:45 crc 
kubenswrapper[4698]: I0216 00:09:45.396456 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"] Feb 16 00:09:45 crc kubenswrapper[4698]: W0216 00:09:45.406713 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fbd7758_34a7_49b8_a669_53bf408520f3.slice/crio-d987443c34111faded13cf80714ddc43f771724359ed9438967a7c41ab560fae WatchSource:0}: Error finding container d987443c34111faded13cf80714ddc43f771724359ed9438967a7c41ab560fae: Status 404 returned error can't find the container with id d987443c34111faded13cf80714ddc43f771724359ed9438967a7c41ab560fae Feb 16 00:09:46 crc kubenswrapper[4698]: I0216 00:09:46.010158 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9cf3ac82-0316-4337-8bac-ad07dcebba9a","Type":"ContainerDied","Data":"a541b7248088ac9ecd8f5a12332b7bce284c4e327622f3ac97df433a0d078603"} Feb 16 00:09:46 crc kubenswrapper[4698]: I0216 00:09:46.010441 4698 generic.go:334] "Generic (PLEG): container finished" podID="9cf3ac82-0316-4337-8bac-ad07dcebba9a" containerID="a541b7248088ac9ecd8f5a12332b7bce284c4e327622f3ac97df433a0d078603" exitCode=0 Feb 16 00:09:46 crc kubenswrapper[4698]: I0216 00:09:46.015219 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj" event={"ID":"5fbd7758-34a7-49b8-a669-53bf408520f3","Type":"ContainerStarted","Data":"f2edd329e56e85979d6514e67e0a3bb80fb00b05dc44a752907aae8109ea1397"} Feb 16 00:09:46 crc kubenswrapper[4698]: I0216 00:09:46.015297 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj" 
event={"ID":"5fbd7758-34a7-49b8-a669-53bf408520f3","Type":"ContainerStarted","Data":"d987443c34111faded13cf80714ddc43f771724359ed9438967a7c41ab560fae"} Feb 16 00:09:46 crc kubenswrapper[4698]: I0216 00:09:46.015467 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj" Feb 16 00:09:46 crc kubenswrapper[4698]: I0216 00:09:46.321117 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj" Feb 16 00:09:46 crc kubenswrapper[4698]: I0216 00:09:46.357200 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj" podStartSLOduration=4.357176724 podStartE2EDuration="4.357176724s" podCreationTimestamp="2026-02-16 00:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:46.065957662 +0000 UTC m=+195.723856424" watchObservedRunningTime="2026-02-16 00:09:46.357176724 +0000 UTC m=+196.015075486" Feb 16 00:09:46 crc kubenswrapper[4698]: I0216 00:09:46.698485 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cm6sf" podUID="65952246-da5c-4f4c-bb5a-a0b236b3675f" containerName="registry-server" probeResult="failure" output=< Feb 16 00:09:46 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Feb 16 00:09:46 crc kubenswrapper[4698]: > Feb 16 00:09:47 crc kubenswrapper[4698]: I0216 00:09:47.346999 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 00:09:47 crc kubenswrapper[4698]: I0216 00:09:47.358596 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cf3ac82-0316-4337-8bac-ad07dcebba9a-kube-api-access\") pod \"9cf3ac82-0316-4337-8bac-ad07dcebba9a\" (UID: \"9cf3ac82-0316-4337-8bac-ad07dcebba9a\") " Feb 16 00:09:47 crc kubenswrapper[4698]: I0216 00:09:47.358706 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cf3ac82-0316-4337-8bac-ad07dcebba9a-kubelet-dir\") pod \"9cf3ac82-0316-4337-8bac-ad07dcebba9a\" (UID: \"9cf3ac82-0316-4337-8bac-ad07dcebba9a\") " Feb 16 00:09:47 crc kubenswrapper[4698]: I0216 00:09:47.359079 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cf3ac82-0316-4337-8bac-ad07dcebba9a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9cf3ac82-0316-4337-8bac-ad07dcebba9a" (UID: "9cf3ac82-0316-4337-8bac-ad07dcebba9a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:09:47 crc kubenswrapper[4698]: I0216 00:09:47.366739 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf3ac82-0316-4337-8bac-ad07dcebba9a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9cf3ac82-0316-4337-8bac-ad07dcebba9a" (UID: "9cf3ac82-0316-4337-8bac-ad07dcebba9a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:09:47 crc kubenswrapper[4698]: I0216 00:09:47.460210 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cf3ac82-0316-4337-8bac-ad07dcebba9a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:47 crc kubenswrapper[4698]: I0216 00:09:47.460277 4698 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cf3ac82-0316-4337-8bac-ad07dcebba9a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:48 crc kubenswrapper[4698]: I0216 00:09:48.036322 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9cf3ac82-0316-4337-8bac-ad07dcebba9a","Type":"ContainerDied","Data":"253ed5cf7579842b052f5ca8e01a44e34116c1a169fe546d06eb6160bed983f8"} Feb 16 00:09:48 crc kubenswrapper[4698]: I0216 00:09:48.036945 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="253ed5cf7579842b052f5ca8e01a44e34116c1a169fe546d06eb6160bed983f8" Feb 16 00:09:48 crc kubenswrapper[4698]: I0216 00:09:48.036422 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 00:09:50 crc kubenswrapper[4698]: I0216 00:09:50.879831 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 00:09:50 crc kubenswrapper[4698]: E0216 00:09:50.880392 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf3ac82-0316-4337-8bac-ad07dcebba9a" containerName="pruner" Feb 16 00:09:50 crc kubenswrapper[4698]: I0216 00:09:50.880406 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf3ac82-0316-4337-8bac-ad07dcebba9a" containerName="pruner" Feb 16 00:09:50 crc kubenswrapper[4698]: I0216 00:09:50.880530 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf3ac82-0316-4337-8bac-ad07dcebba9a" containerName="pruner" Feb 16 00:09:50 crc kubenswrapper[4698]: I0216 00:09:50.880932 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 00:09:50 crc kubenswrapper[4698]: I0216 00:09:50.884053 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 00:09:50 crc kubenswrapper[4698]: I0216 00:09:50.886252 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 00:09:50 crc kubenswrapper[4698]: I0216 00:09:50.913593 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 00:09:50 crc kubenswrapper[4698]: I0216 00:09:50.937855 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1897092-951e-4baa-b926-243514d4e981-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b1897092-951e-4baa-b926-243514d4e981\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 00:09:50 crc kubenswrapper[4698]: I0216 00:09:50.937935 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1897092-951e-4baa-b926-243514d4e981-kube-api-access\") pod \"installer-9-crc\" (UID: \"b1897092-951e-4baa-b926-243514d4e981\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 00:09:50 crc kubenswrapper[4698]: I0216 00:09:50.938000 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1897092-951e-4baa-b926-243514d4e981-var-lock\") pod \"installer-9-crc\" (UID: \"b1897092-951e-4baa-b926-243514d4e981\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 00:09:51 crc kubenswrapper[4698]: I0216 00:09:51.039206 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1897092-951e-4baa-b926-243514d4e981-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b1897092-951e-4baa-b926-243514d4e981\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 00:09:51 crc kubenswrapper[4698]: I0216 00:09:51.039317 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1897092-951e-4baa-b926-243514d4e981-kube-api-access\") pod \"installer-9-crc\" (UID: \"b1897092-951e-4baa-b926-243514d4e981\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 00:09:51 crc kubenswrapper[4698]: I0216 00:09:51.039387 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1897092-951e-4baa-b926-243514d4e981-var-lock\") pod \"installer-9-crc\" (UID: \"b1897092-951e-4baa-b926-243514d4e981\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 00:09:51 crc kubenswrapper[4698]: I0216 00:09:51.039480 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/b1897092-951e-4baa-b926-243514d4e981-var-lock\") pod \"installer-9-crc\" (UID: \"b1897092-951e-4baa-b926-243514d4e981\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 00:09:51 crc kubenswrapper[4698]: I0216 00:09:51.039536 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1897092-951e-4baa-b926-243514d4e981-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b1897092-951e-4baa-b926-243514d4e981\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 00:09:51 crc kubenswrapper[4698]: I0216 00:09:51.061446 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1897092-951e-4baa-b926-243514d4e981-kube-api-access\") pod \"installer-9-crc\" (UID: \"b1897092-951e-4baa-b926-243514d4e981\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 00:09:51 crc kubenswrapper[4698]: I0216 00:09:51.217903 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 00:09:51 crc kubenswrapper[4698]: I0216 00:09:51.659057 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 00:09:52 crc kubenswrapper[4698]: I0216 00:09:52.062584 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b1897092-951e-4baa-b926-243514d4e981","Type":"ContainerStarted","Data":"e896bef42bd57e3791625e78f3feb7aaffbbd78c2a5def6eb44a6de2ecf5ba4a"} Feb 16 00:09:52 crc kubenswrapper[4698]: I0216 00:09:52.388605 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vrkd2" Feb 16 00:09:52 crc kubenswrapper[4698]: I0216 00:09:52.388730 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vrkd2" Feb 16 00:09:52 crc kubenswrapper[4698]: I0216 00:09:52.503102 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vrkd2" Feb 16 00:09:53 crc kubenswrapper[4698]: I0216 00:09:53.076454 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b1897092-951e-4baa-b926-243514d4e981","Type":"ContainerStarted","Data":"89dbe9da5fd74c999953912da4cfe24835f865d86db46bb7cba3e6c5da3dbf72"} Feb 16 00:09:53 crc kubenswrapper[4698]: I0216 00:09:53.123564 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.123540763 podStartE2EDuration="3.123540763s" podCreationTimestamp="2026-02-16 00:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:09:53.119597411 +0000 UTC m=+202.777496183" watchObservedRunningTime="2026-02-16 00:09:53.123540763 +0000 UTC 
m=+202.781439545" Feb 16 00:09:53 crc kubenswrapper[4698]: I0216 00:09:53.184308 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vrkd2" Feb 16 00:09:54 crc kubenswrapper[4698]: I0216 00:09:54.352437 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4qvcf" Feb 16 00:09:54 crc kubenswrapper[4698]: I0216 00:09:54.624111 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:54 crc kubenswrapper[4698]: I0216 00:09:54.624214 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:54 crc kubenswrapper[4698]: I0216 00:09:54.690635 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:55 crc kubenswrapper[4698]: I0216 00:09:55.076898 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:55 crc kubenswrapper[4698]: I0216 00:09:55.127006 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:55 crc kubenswrapper[4698]: I0216 00:09:55.130627 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:09:56 crc kubenswrapper[4698]: I0216 00:09:56.560908 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cm6sf"] Feb 16 00:09:56 crc kubenswrapper[4698]: I0216 00:09:56.561382 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cm6sf" podUID="65952246-da5c-4f4c-bb5a-a0b236b3675f" containerName="registry-server" 
containerID="cri-o://78de4e7a561b7a18196574288815538f9eb6d3a4b8f309be36dc0e083462e1aa" gracePeriod=2 Feb 16 00:09:57 crc kubenswrapper[4698]: I0216 00:09:57.046527 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:09:57 crc kubenswrapper[4698]: I0216 00:09:57.046658 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:09:58 crc kubenswrapper[4698]: I0216 00:09:58.131352 4698 generic.go:334] "Generic (PLEG): container finished" podID="65952246-da5c-4f4c-bb5a-a0b236b3675f" containerID="78de4e7a561b7a18196574288815538f9eb6d3a4b8f309be36dc0e083462e1aa" exitCode=0 Feb 16 00:09:58 crc kubenswrapper[4698]: I0216 00:09:58.131474 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cm6sf" event={"ID":"65952246-da5c-4f4c-bb5a-a0b236b3675f","Type":"ContainerDied","Data":"78de4e7a561b7a18196574288815538f9eb6d3a4b8f309be36dc0e083462e1aa"} Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.092599 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.159506 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cm6sf" event={"ID":"65952246-da5c-4f4c-bb5a-a0b236b3675f","Type":"ContainerDied","Data":"6f96d656b9665a3e9a94ddc632b3580395ca14974e355d8cd21138b1293a7dfb"} Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.159708 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cm6sf" Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.159715 4698 scope.go:117] "RemoveContainer" containerID="78de4e7a561b7a18196574288815538f9eb6d3a4b8f309be36dc0e083462e1aa" Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.189382 4698 scope.go:117] "RemoveContainer" containerID="c48d9b41a3b1e2e2a5616a3d43631c2f5e7081ea64b8224fcd833775e80b2e54" Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.212123 4698 scope.go:117] "RemoveContainer" containerID="8a3c3f2b1750a494c7ae109f55b85d6af42513de0a3a3067b0a806fc27c19174" Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.275247 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65952246-da5c-4f4c-bb5a-a0b236b3675f-catalog-content\") pod \"65952246-da5c-4f4c-bb5a-a0b236b3675f\" (UID: \"65952246-da5c-4f4c-bb5a-a0b236b3675f\") " Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.275775 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlb7z\" (UniqueName: \"kubernetes.io/projected/65952246-da5c-4f4c-bb5a-a0b236b3675f-kube-api-access-dlb7z\") pod \"65952246-da5c-4f4c-bb5a-a0b236b3675f\" (UID: \"65952246-da5c-4f4c-bb5a-a0b236b3675f\") " Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.275805 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65952246-da5c-4f4c-bb5a-a0b236b3675f-utilities\") pod \"65952246-da5c-4f4c-bb5a-a0b236b3675f\" (UID: \"65952246-da5c-4f4c-bb5a-a0b236b3675f\") " Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.276786 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65952246-da5c-4f4c-bb5a-a0b236b3675f-utilities" (OuterVolumeSpecName: "utilities") pod "65952246-da5c-4f4c-bb5a-a0b236b3675f" (UID: "65952246-da5c-4f4c-bb5a-a0b236b3675f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.283123 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65952246-da5c-4f4c-bb5a-a0b236b3675f-kube-api-access-dlb7z" (OuterVolumeSpecName: "kube-api-access-dlb7z") pod "65952246-da5c-4f4c-bb5a-a0b236b3675f" (UID: "65952246-da5c-4f4c-bb5a-a0b236b3675f"). InnerVolumeSpecName "kube-api-access-dlb7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.293384 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65952246-da5c-4f4c-bb5a-a0b236b3675f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65952246-da5c-4f4c-bb5a-a0b236b3675f" (UID: "65952246-da5c-4f4c-bb5a-a0b236b3675f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.377178 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlb7z\" (UniqueName: \"kubernetes.io/projected/65952246-da5c-4f4c-bb5a-a0b236b3675f-kube-api-access-dlb7z\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.377226 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65952246-da5c-4f4c-bb5a-a0b236b3675f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.377240 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65952246-da5c-4f4c-bb5a-a0b236b3675f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.508011 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cm6sf"] Feb 16 00:09:59 crc kubenswrapper[4698]: I0216 00:09:59.515587 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cm6sf"] Feb 16 00:10:00 crc kubenswrapper[4698]: I0216 00:10:00.167911 4698 generic.go:334] "Generic (PLEG): container finished" podID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" containerID="21912a707a1e7ec35c84eed5b1495b7517fdce6bf285a6fd66fb93bea3932e39" exitCode=0 Feb 16 00:10:00 crc kubenswrapper[4698]: I0216 00:10:00.168307 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4644p" event={"ID":"d741b08c-0e5a-40aa-ba0b-6f11743daa22","Type":"ContainerDied","Data":"21912a707a1e7ec35c84eed5b1495b7517fdce6bf285a6fd66fb93bea3932e39"} Feb 16 00:10:01 crc kubenswrapper[4698]: I0216 00:10:01.179360 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4644p" 
event={"ID":"d741b08c-0e5a-40aa-ba0b-6f11743daa22","Type":"ContainerStarted","Data":"12b8f04efd700727fe674c220ba29b28bb2a066b842cbf237e10d12d68ecb3db"} Feb 16 00:10:01 crc kubenswrapper[4698]: I0216 00:10:01.181230 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwn9c" event={"ID":"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c","Type":"ContainerStarted","Data":"07b4cd140b99831c83628322b31578af0b71a62549bfc2508bdb2175e1d8764a"} Feb 16 00:10:01 crc kubenswrapper[4698]: I0216 00:10:01.184080 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dclvg" event={"ID":"9888dd66-5ef5-499a-92a8-c9fd32335a20","Type":"ContainerStarted","Data":"5d4143de8b33d9628ff70545d6933825009c9851d891a8f1736e89ddc2be31a8"} Feb 16 00:10:01 crc kubenswrapper[4698]: I0216 00:10:01.186082 4698 generic.go:334] "Generic (PLEG): container finished" podID="a92430cf-e02d-41ee-862e-d785decce5ec" containerID="da5a74be616cc31af0dce5259798195a0549e46fc5196d4b607ec785dfefcb3a" exitCode=0 Feb 16 00:10:01 crc kubenswrapper[4698]: I0216 00:10:01.186137 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxx2x" event={"ID":"a92430cf-e02d-41ee-862e-d785decce5ec","Type":"ContainerDied","Data":"da5a74be616cc31af0dce5259798195a0549e46fc5196d4b607ec785dfefcb3a"} Feb 16 00:10:01 crc kubenswrapper[4698]: I0216 00:10:01.192969 4698 generic.go:334] "Generic (PLEG): container finished" podID="5308c07c-9d3d-4ead-8c6e-19c51adf5228" containerID="eaf24530e6f3bb8cd17e72d79f48ab88c935c404701c292b589fcc2fe9361bbc" exitCode=0 Feb 16 00:10:01 crc kubenswrapper[4698]: I0216 00:10:01.193004 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xmh8j" event={"ID":"5308c07c-9d3d-4ead-8c6e-19c51adf5228","Type":"ContainerDied","Data":"eaf24530e6f3bb8cd17e72d79f48ab88c935c404701c292b589fcc2fe9361bbc"} Feb 16 00:10:01 crc kubenswrapper[4698]: I0216 
00:10:01.212886 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4644p" podStartSLOduration=2.7727554210000003 podStartE2EDuration="59.21286146s" podCreationTimestamp="2026-02-16 00:09:02 +0000 UTC" firstStartedPulling="2026-02-16 00:09:04.307024243 +0000 UTC m=+153.964923005" lastFinishedPulling="2026-02-16 00:10:00.747130282 +0000 UTC m=+210.405029044" observedRunningTime="2026-02-16 00:10:01.211523498 +0000 UTC m=+210.869422260" watchObservedRunningTime="2026-02-16 00:10:01.21286146 +0000 UTC m=+210.870760262" Feb 16 00:10:01 crc kubenswrapper[4698]: I0216 00:10:01.238335 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65952246-da5c-4f4c-bb5a-a0b236b3675f" path="/var/lib/kubelet/pods/65952246-da5c-4f4c-bb5a-a0b236b3675f/volumes" Feb 16 00:10:02 crc kubenswrapper[4698]: I0216 00:10:02.201095 4698 generic.go:334] "Generic (PLEG): container finished" podID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" containerID="07b4cd140b99831c83628322b31578af0b71a62549bfc2508bdb2175e1d8764a" exitCode=0 Feb 16 00:10:02 crc kubenswrapper[4698]: I0216 00:10:02.201160 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwn9c" event={"ID":"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c","Type":"ContainerDied","Data":"07b4cd140b99831c83628322b31578af0b71a62549bfc2508bdb2175e1d8764a"} Feb 16 00:10:02 crc kubenswrapper[4698]: I0216 00:10:02.206094 4698 generic.go:334] "Generic (PLEG): container finished" podID="9888dd66-5ef5-499a-92a8-c9fd32335a20" containerID="5d4143de8b33d9628ff70545d6933825009c9851d891a8f1736e89ddc2be31a8" exitCode=0 Feb 16 00:10:02 crc kubenswrapper[4698]: I0216 00:10:02.206217 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dclvg" event={"ID":"9888dd66-5ef5-499a-92a8-c9fd32335a20","Type":"ContainerDied","Data":"5d4143de8b33d9628ff70545d6933825009c9851d891a8f1736e89ddc2be31a8"} Feb 16 
00:10:02 crc kubenswrapper[4698]: I0216 00:10:02.515648 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ckzvr"] Feb 16 00:10:02 crc kubenswrapper[4698]: I0216 00:10:02.552792 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4644p" Feb 16 00:10:02 crc kubenswrapper[4698]: I0216 00:10:02.553523 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4644p" Feb 16 00:10:03 crc kubenswrapper[4698]: I0216 00:10:03.594181 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4644p" podUID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" containerName="registry-server" probeResult="failure" output=< Feb 16 00:10:03 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Feb 16 00:10:03 crc kubenswrapper[4698]: > Feb 16 00:10:06 crc kubenswrapper[4698]: I0216 00:10:06.235329 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxx2x" event={"ID":"a92430cf-e02d-41ee-862e-d785decce5ec","Type":"ContainerStarted","Data":"0a32f0e7a3fc17bae7d66007a90110288c9cbc149b6b306348bc1ab43c7950ea"} Feb 16 00:10:09 crc kubenswrapper[4698]: I0216 00:10:09.259252 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xmh8j" event={"ID":"5308c07c-9d3d-4ead-8c6e-19c51adf5228","Type":"ContainerStarted","Data":"fa93237dc56d2ae572bfa909d591bc3226eaa86cfd3d7916d67a4909c790b9ee"} Feb 16 00:10:10 crc kubenswrapper[4698]: I0216 00:10:10.295513 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fxx2x" podStartSLOduration=9.184047688 podStartE2EDuration="1m8.295491154s" podCreationTimestamp="2026-02-16 00:09:02 +0000 UTC" firstStartedPulling="2026-02-16 00:09:05.376077704 +0000 UTC 
m=+155.033976466" lastFinishedPulling="2026-02-16 00:10:04.48752118 +0000 UTC m=+214.145419932" observedRunningTime="2026-02-16 00:10:07.263079612 +0000 UTC m=+216.920978374" watchObservedRunningTime="2026-02-16 00:10:10.295491154 +0000 UTC m=+219.953389936" Feb 16 00:10:10 crc kubenswrapper[4698]: I0216 00:10:10.297250 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xmh8j" podStartSLOduration=5.126480431 podStartE2EDuration="1m8.297242948s" podCreationTimestamp="2026-02-16 00:09:02 +0000 UTC" firstStartedPulling="2026-02-16 00:09:05.38544775 +0000 UTC m=+155.043346512" lastFinishedPulling="2026-02-16 00:10:08.556210277 +0000 UTC m=+218.214109029" observedRunningTime="2026-02-16 00:10:10.2927851 +0000 UTC m=+219.950683912" watchObservedRunningTime="2026-02-16 00:10:10.297242948 +0000 UTC m=+219.955141730" Feb 16 00:10:11 crc kubenswrapper[4698]: I0216 00:10:11.272825 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwn9c" event={"ID":"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c","Type":"ContainerStarted","Data":"24db8f4df12384613787b8650db29affa1ca6d704b141a4b34a86a2b4aa42aca"} Feb 16 00:10:12 crc kubenswrapper[4698]: I0216 00:10:12.282783 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dclvg" event={"ID":"9888dd66-5ef5-499a-92a8-c9fd32335a20","Type":"ContainerStarted","Data":"5166dbda72366b53f2691160c08594350673bf96e92d216ca5b2fd86b81ec6fc"} Feb 16 00:10:12 crc kubenswrapper[4698]: I0216 00:10:12.306331 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bwn9c" podStartSLOduration=4.372841002 podStartE2EDuration="1m7.306305474s" podCreationTimestamp="2026-02-16 00:09:05 +0000 UTC" firstStartedPulling="2026-02-16 00:09:07.511042388 +0000 UTC m=+157.168941150" lastFinishedPulling="2026-02-16 00:10:10.44450686 +0000 UTC m=+220.102405622" 
observedRunningTime="2026-02-16 00:10:12.300916127 +0000 UTC m=+221.958814909" watchObservedRunningTime="2026-02-16 00:10:12.306305474 +0000 UTC m=+221.964204246" Feb 16 00:10:12 crc kubenswrapper[4698]: I0216 00:10:12.328832 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dclvg" podStartSLOduration=3.083388651 podStartE2EDuration="1m7.32880961s" podCreationTimestamp="2026-02-16 00:09:05 +0000 UTC" firstStartedPulling="2026-02-16 00:09:07.530229881 +0000 UTC m=+157.188128643" lastFinishedPulling="2026-02-16 00:10:11.77565083 +0000 UTC m=+221.433549602" observedRunningTime="2026-02-16 00:10:12.323520596 +0000 UTC m=+221.981419368" watchObservedRunningTime="2026-02-16 00:10:12.32880961 +0000 UTC m=+221.986708382" Feb 16 00:10:12 crc kubenswrapper[4698]: I0216 00:10:12.605666 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4644p" Feb 16 00:10:12 crc kubenswrapper[4698]: I0216 00:10:12.645239 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4644p" Feb 16 00:10:12 crc kubenswrapper[4698]: I0216 00:10:12.783896 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fxx2x" Feb 16 00:10:12 crc kubenswrapper[4698]: I0216 00:10:12.783956 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fxx2x" Feb 16 00:10:12 crc kubenswrapper[4698]: I0216 00:10:12.824146 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fxx2x" Feb 16 00:10:12 crc kubenswrapper[4698]: I0216 00:10:12.979552 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xmh8j" Feb 16 00:10:12 crc kubenswrapper[4698]: I0216 00:10:12.979639 4698 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xmh8j" Feb 16 00:10:13 crc kubenswrapper[4698]: I0216 00:10:13.019585 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xmh8j" Feb 16 00:10:13 crc kubenswrapper[4698]: I0216 00:10:13.328063 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fxx2x" Feb 16 00:10:15 crc kubenswrapper[4698]: I0216 00:10:15.154262 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxx2x"] Feb 16 00:10:15 crc kubenswrapper[4698]: I0216 00:10:15.298972 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fxx2x" podUID="a92430cf-e02d-41ee-862e-d785decce5ec" containerName="registry-server" containerID="cri-o://0a32f0e7a3fc17bae7d66007a90110288c9cbc149b6b306348bc1ab43c7950ea" gracePeriod=2 Feb 16 00:10:15 crc kubenswrapper[4698]: I0216 00:10:15.663943 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bwn9c" Feb 16 00:10:15 crc kubenswrapper[4698]: I0216 00:10:15.664326 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bwn9c" Feb 16 00:10:15 crc kubenswrapper[4698]: I0216 00:10:15.810059 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxx2x" Feb 16 00:10:15 crc kubenswrapper[4698]: I0216 00:10:15.842435 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92430cf-e02d-41ee-862e-d785decce5ec-utilities\") pod \"a92430cf-e02d-41ee-862e-d785decce5ec\" (UID: \"a92430cf-e02d-41ee-862e-d785decce5ec\") " Feb 16 00:10:15 crc kubenswrapper[4698]: I0216 00:10:15.842591 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92430cf-e02d-41ee-862e-d785decce5ec-catalog-content\") pod \"a92430cf-e02d-41ee-862e-d785decce5ec\" (UID: \"a92430cf-e02d-41ee-862e-d785decce5ec\") " Feb 16 00:10:15 crc kubenswrapper[4698]: I0216 00:10:15.842644 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kjhb\" (UniqueName: \"kubernetes.io/projected/a92430cf-e02d-41ee-862e-d785decce5ec-kube-api-access-9kjhb\") pod \"a92430cf-e02d-41ee-862e-d785decce5ec\" (UID: \"a92430cf-e02d-41ee-862e-d785decce5ec\") " Feb 16 00:10:15 crc kubenswrapper[4698]: I0216 00:10:15.843823 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a92430cf-e02d-41ee-862e-d785decce5ec-utilities" (OuterVolumeSpecName: "utilities") pod "a92430cf-e02d-41ee-862e-d785decce5ec" (UID: "a92430cf-e02d-41ee-862e-d785decce5ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:10:15 crc kubenswrapper[4698]: I0216 00:10:15.857058 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a92430cf-e02d-41ee-862e-d785decce5ec-kube-api-access-9kjhb" (OuterVolumeSpecName: "kube-api-access-9kjhb") pod "a92430cf-e02d-41ee-862e-d785decce5ec" (UID: "a92430cf-e02d-41ee-862e-d785decce5ec"). InnerVolumeSpecName "kube-api-access-9kjhb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:10:15 crc kubenswrapper[4698]: I0216 00:10:15.907091 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a92430cf-e02d-41ee-862e-d785decce5ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a92430cf-e02d-41ee-862e-d785decce5ec" (UID: "a92430cf-e02d-41ee-862e-d785decce5ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:10:15 crc kubenswrapper[4698]: I0216 00:10:15.944757 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92430cf-e02d-41ee-862e-d785decce5ec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:15 crc kubenswrapper[4698]: I0216 00:10:15.944804 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kjhb\" (UniqueName: \"kubernetes.io/projected/a92430cf-e02d-41ee-862e-d785decce5ec-kube-api-access-9kjhb\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:15 crc kubenswrapper[4698]: I0216 00:10:15.944817 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92430cf-e02d-41ee-862e-d785decce5ec-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.018876 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dclvg" Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.018965 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dclvg" Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.307854 4698 generic.go:334] "Generic (PLEG): container finished" podID="a92430cf-e02d-41ee-862e-d785decce5ec" containerID="0a32f0e7a3fc17bae7d66007a90110288c9cbc149b6b306348bc1ab43c7950ea" exitCode=0 Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 
00:10:16.307901 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxx2x" event={"ID":"a92430cf-e02d-41ee-862e-d785decce5ec","Type":"ContainerDied","Data":"0a32f0e7a3fc17bae7d66007a90110288c9cbc149b6b306348bc1ab43c7950ea"} Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.307971 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxx2x" event={"ID":"a92430cf-e02d-41ee-862e-d785decce5ec","Type":"ContainerDied","Data":"3c4bf14c55ddd713b18a1f2f0ee813a843e26722c432189fe2702ae34e738be4"} Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.307993 4698 scope.go:117] "RemoveContainer" containerID="0a32f0e7a3fc17bae7d66007a90110288c9cbc149b6b306348bc1ab43c7950ea" Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.307933 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxx2x" Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.324320 4698 scope.go:117] "RemoveContainer" containerID="da5a74be616cc31af0dce5259798195a0549e46fc5196d4b607ec785dfefcb3a" Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.336733 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxx2x"] Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.339666 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fxx2x"] Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.360904 4698 scope.go:117] "RemoveContainer" containerID="932e45a9e5aed578768414d709ca6585b928fbc27fcab303e139ae3223812c2c" Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.376314 4698 scope.go:117] "RemoveContainer" containerID="0a32f0e7a3fc17bae7d66007a90110288c9cbc149b6b306348bc1ab43c7950ea" Feb 16 00:10:16 crc kubenswrapper[4698]: E0216 00:10:16.376945 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"0a32f0e7a3fc17bae7d66007a90110288c9cbc149b6b306348bc1ab43c7950ea\": container with ID starting with 0a32f0e7a3fc17bae7d66007a90110288c9cbc149b6b306348bc1ab43c7950ea not found: ID does not exist" containerID="0a32f0e7a3fc17bae7d66007a90110288c9cbc149b6b306348bc1ab43c7950ea" Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.377015 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a32f0e7a3fc17bae7d66007a90110288c9cbc149b6b306348bc1ab43c7950ea"} err="failed to get container status \"0a32f0e7a3fc17bae7d66007a90110288c9cbc149b6b306348bc1ab43c7950ea\": rpc error: code = NotFound desc = could not find container \"0a32f0e7a3fc17bae7d66007a90110288c9cbc149b6b306348bc1ab43c7950ea\": container with ID starting with 0a32f0e7a3fc17bae7d66007a90110288c9cbc149b6b306348bc1ab43c7950ea not found: ID does not exist" Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.377055 4698 scope.go:117] "RemoveContainer" containerID="da5a74be616cc31af0dce5259798195a0549e46fc5196d4b607ec785dfefcb3a" Feb 16 00:10:16 crc kubenswrapper[4698]: E0216 00:10:16.377632 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da5a74be616cc31af0dce5259798195a0549e46fc5196d4b607ec785dfefcb3a\": container with ID starting with da5a74be616cc31af0dce5259798195a0549e46fc5196d4b607ec785dfefcb3a not found: ID does not exist" containerID="da5a74be616cc31af0dce5259798195a0549e46fc5196d4b607ec785dfefcb3a" Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.377690 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5a74be616cc31af0dce5259798195a0549e46fc5196d4b607ec785dfefcb3a"} err="failed to get container status \"da5a74be616cc31af0dce5259798195a0549e46fc5196d4b607ec785dfefcb3a\": rpc error: code = NotFound desc = could not find container 
\"da5a74be616cc31af0dce5259798195a0549e46fc5196d4b607ec785dfefcb3a\": container with ID starting with da5a74be616cc31af0dce5259798195a0549e46fc5196d4b607ec785dfefcb3a not found: ID does not exist" Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.377732 4698 scope.go:117] "RemoveContainer" containerID="932e45a9e5aed578768414d709ca6585b928fbc27fcab303e139ae3223812c2c" Feb 16 00:10:16 crc kubenswrapper[4698]: E0216 00:10:16.378199 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"932e45a9e5aed578768414d709ca6585b928fbc27fcab303e139ae3223812c2c\": container with ID starting with 932e45a9e5aed578768414d709ca6585b928fbc27fcab303e139ae3223812c2c not found: ID does not exist" containerID="932e45a9e5aed578768414d709ca6585b928fbc27fcab303e139ae3223812c2c" Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.378234 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"932e45a9e5aed578768414d709ca6585b928fbc27fcab303e139ae3223812c2c"} err="failed to get container status \"932e45a9e5aed578768414d709ca6585b928fbc27fcab303e139ae3223812c2c\": rpc error: code = NotFound desc = could not find container \"932e45a9e5aed578768414d709ca6585b928fbc27fcab303e139ae3223812c2c\": container with ID starting with 932e45a9e5aed578768414d709ca6585b928fbc27fcab303e139ae3223812c2c not found: ID does not exist" Feb 16 00:10:16 crc kubenswrapper[4698]: I0216 00:10:16.710106 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bwn9c" podUID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" containerName="registry-server" probeResult="failure" output=< Feb 16 00:10:16 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Feb 16 00:10:16 crc kubenswrapper[4698]: > Feb 16 00:10:17 crc kubenswrapper[4698]: I0216 00:10:17.053752 4698 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-dclvg" podUID="9888dd66-5ef5-499a-92a8-c9fd32335a20" containerName="registry-server" probeResult="failure" output=< Feb 16 00:10:17 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Feb 16 00:10:17 crc kubenswrapper[4698]: > Feb 16 00:10:17 crc kubenswrapper[4698]: I0216 00:10:17.239348 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a92430cf-e02d-41ee-862e-d785decce5ec" path="/var/lib/kubelet/pods/a92430cf-e02d-41ee-862e-d785decce5ec/volumes" Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.379754 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"] Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.380779 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps" podUID="38b00be0-37c7-41c1-b899-4a7a819fba96" containerName="controller-manager" containerID="cri-o://7d5993bdf3cc0f1cb90732d2f4e1f9694736cfa2eb70901143024f940f8a4a72" gracePeriod=30 Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.470664 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"] Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.470898 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj" podUID="5fbd7758-34a7-49b8-a669-53bf408520f3" containerName="route-controller-manager" containerID="cri-o://f2edd329e56e85979d6514e67e0a3bb80fb00b05dc44a752907aae8109ea1397" gracePeriod=30 Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.851105 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps" Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.900955 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj" Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.941331 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d4lp\" (UniqueName: \"kubernetes.io/projected/5fbd7758-34a7-49b8-a669-53bf408520f3-kube-api-access-8d4lp\") pod \"5fbd7758-34a7-49b8-a669-53bf408520f3\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.941423 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b00be0-37c7-41c1-b899-4a7a819fba96-serving-cert\") pod \"38b00be0-37c7-41c1-b899-4a7a819fba96\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.941474 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fbd7758-34a7-49b8-a669-53bf408520f3-config\") pod \"5fbd7758-34a7-49b8-a669-53bf408520f3\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.941525 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-client-ca\") pod \"38b00be0-37c7-41c1-b899-4a7a819fba96\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.941578 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-proxy-ca-bundles\") pod 
\"38b00be0-37c7-41c1-b899-4a7a819fba96\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.941640 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-config\") pod \"38b00be0-37c7-41c1-b899-4a7a819fba96\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.941679 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fbd7758-34a7-49b8-a669-53bf408520f3-serving-cert\") pod \"5fbd7758-34a7-49b8-a669-53bf408520f3\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.941723 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55d88\" (UniqueName: \"kubernetes.io/projected/38b00be0-37c7-41c1-b899-4a7a819fba96-kube-api-access-55d88\") pod \"38b00be0-37c7-41c1-b899-4a7a819fba96\" (UID: \"38b00be0-37c7-41c1-b899-4a7a819fba96\") " Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.941770 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fbd7758-34a7-49b8-a669-53bf408520f3-client-ca\") pod \"5fbd7758-34a7-49b8-a669-53bf408520f3\" (UID: \"5fbd7758-34a7-49b8-a669-53bf408520f3\") " Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.942461 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fbd7758-34a7-49b8-a669-53bf408520f3-config" (OuterVolumeSpecName: "config") pod "5fbd7758-34a7-49b8-a669-53bf408520f3" (UID: "5fbd7758-34a7-49b8-a669-53bf408520f3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.942869 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fbd7758-34a7-49b8-a669-53bf408520f3-client-ca" (OuterVolumeSpecName: "client-ca") pod "5fbd7758-34a7-49b8-a669-53bf408520f3" (UID: "5fbd7758-34a7-49b8-a669-53bf408520f3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.942984 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-config" (OuterVolumeSpecName: "config") pod "38b00be0-37c7-41c1-b899-4a7a819fba96" (UID: "38b00be0-37c7-41c1-b899-4a7a819fba96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.943358 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "38b00be0-37c7-41c1-b899-4a7a819fba96" (UID: "38b00be0-37c7-41c1-b899-4a7a819fba96"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.945897 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-client-ca" (OuterVolumeSpecName: "client-ca") pod "38b00be0-37c7-41c1-b899-4a7a819fba96" (UID: "38b00be0-37c7-41c1-b899-4a7a819fba96"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.948280 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b00be0-37c7-41c1-b899-4a7a819fba96-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "38b00be0-37c7-41c1-b899-4a7a819fba96" (UID: "38b00be0-37c7-41c1-b899-4a7a819fba96"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.948316 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fbd7758-34a7-49b8-a669-53bf408520f3-kube-api-access-8d4lp" (OuterVolumeSpecName: "kube-api-access-8d4lp") pod "5fbd7758-34a7-49b8-a669-53bf408520f3" (UID: "5fbd7758-34a7-49b8-a669-53bf408520f3"). InnerVolumeSpecName "kube-api-access-8d4lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.948948 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b00be0-37c7-41c1-b899-4a7a819fba96-kube-api-access-55d88" (OuterVolumeSpecName: "kube-api-access-55d88") pod "38b00be0-37c7-41c1-b899-4a7a819fba96" (UID: "38b00be0-37c7-41c1-b899-4a7a819fba96"). InnerVolumeSpecName "kube-api-access-55d88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:10:22 crc kubenswrapper[4698]: I0216 00:10:22.949137 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fbd7758-34a7-49b8-a669-53bf408520f3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5fbd7758-34a7-49b8-a669-53bf408520f3" (UID: "5fbd7758-34a7-49b8-a669-53bf408520f3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.033772 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xmh8j"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.043959 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b00be0-37c7-41c1-b899-4a7a819fba96-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.044000 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fbd7758-34a7-49b8-a669-53bf408520f3-config\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.044015 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.044027 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.044040 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b00be0-37c7-41c1-b899-4a7a819fba96-config\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.044051 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fbd7758-34a7-49b8-a669-53bf408520f3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.044063 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55d88\" (UniqueName: \"kubernetes.io/projected/38b00be0-37c7-41c1-b899-4a7a819fba96-kube-api-access-55d88\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.044074 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fbd7758-34a7-49b8-a669-53bf408520f3-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.044095 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d4lp\" (UniqueName: \"kubernetes.io/projected/5fbd7758-34a7-49b8-a669-53bf408520f3-kube-api-access-8d4lp\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.083692 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xmh8j"]
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.356104 4698 generic.go:334] "Generic (PLEG): container finished" podID="38b00be0-37c7-41c1-b899-4a7a819fba96" containerID="7d5993bdf3cc0f1cb90732d2f4e1f9694736cfa2eb70901143024f940f8a4a72" exitCode=0
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.356238 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps" event={"ID":"38b00be0-37c7-41c1-b899-4a7a819fba96","Type":"ContainerDied","Data":"7d5993bdf3cc0f1cb90732d2f4e1f9694736cfa2eb70901143024f940f8a4a72"}
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.356271 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps" event={"ID":"38b00be0-37c7-41c1-b899-4a7a819fba96","Type":"ContainerDied","Data":"701a708922ff52a661cfae119580c733ef5402c27612b5ac3f7e0b7cd3191ed7"}
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.356298 4698 scope.go:117] "RemoveContainer" containerID="7d5993bdf3cc0f1cb90732d2f4e1f9694736cfa2eb70901143024f940f8a4a72"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.356480 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.359481 4698 generic.go:334] "Generic (PLEG): container finished" podID="5fbd7758-34a7-49b8-a669-53bf408520f3" containerID="f2edd329e56e85979d6514e67e0a3bb80fb00b05dc44a752907aae8109ea1397" exitCode=0
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.359756 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xmh8j" podUID="5308c07c-9d3d-4ead-8c6e-19c51adf5228" containerName="registry-server" containerID="cri-o://fa93237dc56d2ae572bfa909d591bc3226eaa86cfd3d7916d67a4909c790b9ee" gracePeriod=2
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.360408 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.359841 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj" event={"ID":"5fbd7758-34a7-49b8-a669-53bf408520f3","Type":"ContainerDied","Data":"f2edd329e56e85979d6514e67e0a3bb80fb00b05dc44a752907aae8109ea1397"}
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.360714 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj" event={"ID":"5fbd7758-34a7-49b8-a669-53bf408520f3","Type":"ContainerDied","Data":"d987443c34111faded13cf80714ddc43f771724359ed9438967a7c41ab560fae"}
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.392751 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"]
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.401719 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787757c785-7bgpj"]
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.405459 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"]
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.408318 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58f57f5dbb-bkkps"]
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.408609 4698 scope.go:117] "RemoveContainer" containerID="7d5993bdf3cc0f1cb90732d2f4e1f9694736cfa2eb70901143024f940f8a4a72"
Feb 16 00:10:23 crc kubenswrapper[4698]: E0216 00:10:23.409169 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5993bdf3cc0f1cb90732d2f4e1f9694736cfa2eb70901143024f940f8a4a72\": container with ID starting with 7d5993bdf3cc0f1cb90732d2f4e1f9694736cfa2eb70901143024f940f8a4a72 not found: ID does not exist" containerID="7d5993bdf3cc0f1cb90732d2f4e1f9694736cfa2eb70901143024f940f8a4a72"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.409341 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5993bdf3cc0f1cb90732d2f4e1f9694736cfa2eb70901143024f940f8a4a72"} err="failed to get container status \"7d5993bdf3cc0f1cb90732d2f4e1f9694736cfa2eb70901143024f940f8a4a72\": rpc error: code = NotFound desc = could not find container \"7d5993bdf3cc0f1cb90732d2f4e1f9694736cfa2eb70901143024f940f8a4a72\": container with ID starting with 7d5993bdf3cc0f1cb90732d2f4e1f9694736cfa2eb70901143024f940f8a4a72 not found: ID does not exist"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.409464 4698 scope.go:117] "RemoveContainer" containerID="f2edd329e56e85979d6514e67e0a3bb80fb00b05dc44a752907aae8109ea1397"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.475330 4698 scope.go:117] "RemoveContainer" containerID="f2edd329e56e85979d6514e67e0a3bb80fb00b05dc44a752907aae8109ea1397"
Feb 16 00:10:23 crc kubenswrapper[4698]: E0216 00:10:23.476925 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2edd329e56e85979d6514e67e0a3bb80fb00b05dc44a752907aae8109ea1397\": container with ID starting with f2edd329e56e85979d6514e67e0a3bb80fb00b05dc44a752907aae8109ea1397 not found: ID does not exist" containerID="f2edd329e56e85979d6514e67e0a3bb80fb00b05dc44a752907aae8109ea1397"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.476993 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2edd329e56e85979d6514e67e0a3bb80fb00b05dc44a752907aae8109ea1397"} err="failed to get container status \"f2edd329e56e85979d6514e67e0a3bb80fb00b05dc44a752907aae8109ea1397\": rpc error: code = NotFound desc = could not find container \"f2edd329e56e85979d6514e67e0a3bb80fb00b05dc44a752907aae8109ea1397\": container with ID starting with f2edd329e56e85979d6514e67e0a3bb80fb00b05dc44a752907aae8109ea1397 not found: ID does not exist"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.549048 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"]
Feb 16 00:10:23 crc kubenswrapper[4698]: E0216 00:10:23.549460 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65952246-da5c-4f4c-bb5a-a0b236b3675f" containerName="registry-server"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.549495 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="65952246-da5c-4f4c-bb5a-a0b236b3675f" containerName="registry-server"
Feb 16 00:10:23 crc kubenswrapper[4698]: E0216 00:10:23.549521 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92430cf-e02d-41ee-862e-d785decce5ec" containerName="extract-content"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.549535 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92430cf-e02d-41ee-862e-d785decce5ec" containerName="extract-content"
Feb 16 00:10:23 crc kubenswrapper[4698]: E0216 00:10:23.549551 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65952246-da5c-4f4c-bb5a-a0b236b3675f" containerName="extract-content"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.549564 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="65952246-da5c-4f4c-bb5a-a0b236b3675f" containerName="extract-content"
Feb 16 00:10:23 crc kubenswrapper[4698]: E0216 00:10:23.549586 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b00be0-37c7-41c1-b899-4a7a819fba96" containerName="controller-manager"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.549599 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b00be0-37c7-41c1-b899-4a7a819fba96" containerName="controller-manager"
Feb 16 00:10:23 crc kubenswrapper[4698]: E0216 00:10:23.549668 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92430cf-e02d-41ee-862e-d785decce5ec" containerName="extract-utilities"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.549681 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92430cf-e02d-41ee-862e-d785decce5ec" containerName="extract-utilities"
Feb 16 00:10:23 crc kubenswrapper[4698]: E0216 00:10:23.549694 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92430cf-e02d-41ee-862e-d785decce5ec" containerName="registry-server"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.549709 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92430cf-e02d-41ee-862e-d785decce5ec" containerName="registry-server"
Feb 16 00:10:23 crc kubenswrapper[4698]: E0216 00:10:23.549724 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65952246-da5c-4f4c-bb5a-a0b236b3675f" containerName="extract-utilities"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.549813 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="65952246-da5c-4f4c-bb5a-a0b236b3675f" containerName="extract-utilities"
Feb 16 00:10:23 crc kubenswrapper[4698]: E0216 00:10:23.549830 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fbd7758-34a7-49b8-a669-53bf408520f3" containerName="route-controller-manager"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.549842 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fbd7758-34a7-49b8-a669-53bf408520f3" containerName="route-controller-manager"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.552807 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fbd7758-34a7-49b8-a669-53bf408520f3" containerName="route-controller-manager"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.552877 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b00be0-37c7-41c1-b899-4a7a819fba96" containerName="controller-manager"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.552893 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92430cf-e02d-41ee-862e-d785decce5ec" containerName="registry-server"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.552904 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="65952246-da5c-4f4c-bb5a-a0b236b3675f" containerName="registry-server"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.553469 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.557802 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"]
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.558054 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.558321 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.558673 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.558810 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.559394 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.562823 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.563384 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.563853 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.564125 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.564397 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.564578 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.564770 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.565124 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.572603 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"]
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.574833 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.580282 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"]
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.655057 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-client-ca\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.655096 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-proxy-ca-bundles\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.655140 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbvjh\" (UniqueName: \"kubernetes.io/projected/74a0358a-8c91-42a5-9763-83f17e4fd05d-kube-api-access-jbvjh\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.655165 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae4b48ff-e232-4503-94ef-8acfdf2479f2-serving-cert\") pod \"route-controller-manager-fd6b584d4-s6hj9\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.655192 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-config\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.655225 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdkps\" (UniqueName: \"kubernetes.io/projected/ae4b48ff-e232-4503-94ef-8acfdf2479f2-kube-api-access-xdkps\") pod \"route-controller-manager-fd6b584d4-s6hj9\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.655265 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae4b48ff-e232-4503-94ef-8acfdf2479f2-config\") pod \"route-controller-manager-fd6b584d4-s6hj9\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.655295 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a0358a-8c91-42a5-9763-83f17e4fd05d-serving-cert\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.655324 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae4b48ff-e232-4503-94ef-8acfdf2479f2-client-ca\") pod \"route-controller-manager-fd6b584d4-s6hj9\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.756594 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-client-ca\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.756691 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-proxy-ca-bundles\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.756748 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbvjh\" (UniqueName: \"kubernetes.io/projected/74a0358a-8c91-42a5-9763-83f17e4fd05d-kube-api-access-jbvjh\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.756782 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-config\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.756805 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae4b48ff-e232-4503-94ef-8acfdf2479f2-serving-cert\") pod \"route-controller-manager-fd6b584d4-s6hj9\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.756843 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdkps\" (UniqueName: \"kubernetes.io/projected/ae4b48ff-e232-4503-94ef-8acfdf2479f2-kube-api-access-xdkps\") pod \"route-controller-manager-fd6b584d4-s6hj9\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.756883 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae4b48ff-e232-4503-94ef-8acfdf2479f2-config\") pod \"route-controller-manager-fd6b584d4-s6hj9\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.756912 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a0358a-8c91-42a5-9763-83f17e4fd05d-serving-cert\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.756935 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae4b48ff-e232-4503-94ef-8acfdf2479f2-client-ca\") pod \"route-controller-manager-fd6b584d4-s6hj9\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.757873 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-client-ca\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.757948 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae4b48ff-e232-4503-94ef-8acfdf2479f2-client-ca\") pod \"route-controller-manager-fd6b584d4-s6hj9\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.758873 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-config\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.759161 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-proxy-ca-bundles\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.759582 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae4b48ff-e232-4503-94ef-8acfdf2479f2-config\") pod \"route-controller-manager-fd6b584d4-s6hj9\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.761844 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a0358a-8c91-42a5-9763-83f17e4fd05d-serving-cert\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.762293 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae4b48ff-e232-4503-94ef-8acfdf2479f2-serving-cert\") pod \"route-controller-manager-fd6b584d4-s6hj9\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.769692 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xmh8j"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.779241 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbvjh\" (UniqueName: \"kubernetes.io/projected/74a0358a-8c91-42a5-9763-83f17e4fd05d-kube-api-access-jbvjh\") pod \"controller-manager-55f66f7ccb-wxg45\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.780149 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdkps\" (UniqueName: \"kubernetes.io/projected/ae4b48ff-e232-4503-94ef-8acfdf2479f2-kube-api-access-xdkps\") pod \"route-controller-manager-fd6b584d4-s6hj9\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.857948 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5308c07c-9d3d-4ead-8c6e-19c51adf5228-utilities\") pod \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\" (UID: \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\") "
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.858315 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5308c07c-9d3d-4ead-8c6e-19c51adf5228-catalog-content\") pod \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\" (UID: \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\") "
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.858455 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqhnt\" (UniqueName: \"kubernetes.io/projected/5308c07c-9d3d-4ead-8c6e-19c51adf5228-kube-api-access-pqhnt\") pod \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\" (UID: \"5308c07c-9d3d-4ead-8c6e-19c51adf5228\") "
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.859172 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5308c07c-9d3d-4ead-8c6e-19c51adf5228-utilities" (OuterVolumeSpecName: "utilities") pod "5308c07c-9d3d-4ead-8c6e-19c51adf5228" (UID: "5308c07c-9d3d-4ead-8c6e-19c51adf5228"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.862918 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5308c07c-9d3d-4ead-8c6e-19c51adf5228-kube-api-access-pqhnt" (OuterVolumeSpecName: "kube-api-access-pqhnt") pod "5308c07c-9d3d-4ead-8c6e-19c51adf5228" (UID: "5308c07c-9d3d-4ead-8c6e-19c51adf5228"). InnerVolumeSpecName "kube-api-access-pqhnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.875456 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.886475 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.915586 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5308c07c-9d3d-4ead-8c6e-19c51adf5228-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5308c07c-9d3d-4ead-8c6e-19c51adf5228" (UID: "5308c07c-9d3d-4ead-8c6e-19c51adf5228"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.960408 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5308c07c-9d3d-4ead-8c6e-19c51adf5228-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.960453 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5308c07c-9d3d-4ead-8c6e-19c51adf5228-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:23 crc kubenswrapper[4698]: I0216 00:10:23.960477 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqhnt\" (UniqueName: \"kubernetes.io/projected/5308c07c-9d3d-4ead-8c6e-19c51adf5228-kube-api-access-pqhnt\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.362811 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"]
Feb 16 00:10:24 crc kubenswrapper[4698]: W0216 00:10:24.368737 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74a0358a_8c91_42a5_9763_83f17e4fd05d.slice/crio-a9e127b30e87800980444509b875027fa4a8187f57e67379a67397a80f8b9941 WatchSource:0}: Error finding container a9e127b30e87800980444509b875027fa4a8187f57e67379a67397a80f8b9941: Status 404 returned error can't find the container with id a9e127b30e87800980444509b875027fa4a8187f57e67379a67397a80f8b9941
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.373986 4698 generic.go:334] "Generic (PLEG): container finished" podID="5308c07c-9d3d-4ead-8c6e-19c51adf5228" containerID="fa93237dc56d2ae572bfa909d591bc3226eaa86cfd3d7916d67a4909c790b9ee" exitCode=0
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.374030 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xmh8j" event={"ID":"5308c07c-9d3d-4ead-8c6e-19c51adf5228","Type":"ContainerDied","Data":"fa93237dc56d2ae572bfa909d591bc3226eaa86cfd3d7916d67a4909c790b9ee"}
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.374065 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xmh8j" event={"ID":"5308c07c-9d3d-4ead-8c6e-19c51adf5228","Type":"ContainerDied","Data":"8ace63010c1971da1ab97e319022c11a199351fa34a26351386234fa0827b901"}
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.374088 4698 scope.go:117] "RemoveContainer" containerID="fa93237dc56d2ae572bfa909d591bc3226eaa86cfd3d7916d67a4909c790b9ee"
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.374107 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xmh8j"
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.376864 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"]
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.404318 4698 scope.go:117] "RemoveContainer" containerID="eaf24530e6f3bb8cd17e72d79f48ab88c935c404701c292b589fcc2fe9361bbc"
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.410998 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xmh8j"]
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.414609 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xmh8j"]
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.449048 4698 scope.go:117] "RemoveContainer" containerID="4ad3368180fc3c29dff2c4a28fa535fdda0bb75b558ceee474788142e3e6d688"
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.484088 4698 scope.go:117] "RemoveContainer" containerID="fa93237dc56d2ae572bfa909d591bc3226eaa86cfd3d7916d67a4909c790b9ee"
Feb 16 00:10:24 crc kubenswrapper[4698]: E0216 00:10:24.487135 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa93237dc56d2ae572bfa909d591bc3226eaa86cfd3d7916d67a4909c790b9ee\": container with ID starting with fa93237dc56d2ae572bfa909d591bc3226eaa86cfd3d7916d67a4909c790b9ee not found: ID does not exist" containerID="fa93237dc56d2ae572bfa909d591bc3226eaa86cfd3d7916d67a4909c790b9ee"
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.487217 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa93237dc56d2ae572bfa909d591bc3226eaa86cfd3d7916d67a4909c790b9ee"} err="failed to get container status \"fa93237dc56d2ae572bfa909d591bc3226eaa86cfd3d7916d67a4909c790b9ee\": rpc error: code = NotFound desc = could not find container \"fa93237dc56d2ae572bfa909d591bc3226eaa86cfd3d7916d67a4909c790b9ee\": container with ID starting with fa93237dc56d2ae572bfa909d591bc3226eaa86cfd3d7916d67a4909c790b9ee not found: ID does not exist"
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.487283 4698 scope.go:117] "RemoveContainer" containerID="eaf24530e6f3bb8cd17e72d79f48ab88c935c404701c292b589fcc2fe9361bbc"
Feb 16 00:10:24 crc kubenswrapper[4698]: E0216 00:10:24.487840 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf24530e6f3bb8cd17e72d79f48ab88c935c404701c292b589fcc2fe9361bbc\": container with ID starting with eaf24530e6f3bb8cd17e72d79f48ab88c935c404701c292b589fcc2fe9361bbc not found: ID does not exist" containerID="eaf24530e6f3bb8cd17e72d79f48ab88c935c404701c292b589fcc2fe9361bbc"
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.487900 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf24530e6f3bb8cd17e72d79f48ab88c935c404701c292b589fcc2fe9361bbc"} err="failed to get container status \"eaf24530e6f3bb8cd17e72d79f48ab88c935c404701c292b589fcc2fe9361bbc\": rpc error: code = NotFound desc = could not find container \"eaf24530e6f3bb8cd17e72d79f48ab88c935c404701c292b589fcc2fe9361bbc\": container with ID starting with eaf24530e6f3bb8cd17e72d79f48ab88c935c404701c292b589fcc2fe9361bbc not found: ID does not exist"
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.487924 4698 scope.go:117] "RemoveContainer" containerID="4ad3368180fc3c29dff2c4a28fa535fdda0bb75b558ceee474788142e3e6d688"
Feb 16 00:10:24 crc kubenswrapper[4698]: E0216 00:10:24.488326 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ad3368180fc3c29dff2c4a28fa535fdda0bb75b558ceee474788142e3e6d688\": container with ID starting with 4ad3368180fc3c29dff2c4a28fa535fdda0bb75b558ceee474788142e3e6d688 not found: ID does not exist" containerID="4ad3368180fc3c29dff2c4a28fa535fdda0bb75b558ceee474788142e3e6d688"
Feb 16 00:10:24 crc kubenswrapper[4698]: I0216 00:10:24.488387 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad3368180fc3c29dff2c4a28fa535fdda0bb75b558ceee474788142e3e6d688"} err="failed to get container status \"4ad3368180fc3c29dff2c4a28fa535fdda0bb75b558ceee474788142e3e6d688\": rpc error: code = NotFound desc = could not find container \"4ad3368180fc3c29dff2c4a28fa535fdda0bb75b558ceee474788142e3e6d688\": container with ID starting with 4ad3368180fc3c29dff2c4a28fa535fdda0bb75b558ceee474788142e3e6d688 not found: ID does not exist"
Feb 16 00:10:25 crc kubenswrapper[4698]: I0216 00:10:25.237962 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b00be0-37c7-41c1-b899-4a7a819fba96" path="/var/lib/kubelet/pods/38b00be0-37c7-41c1-b899-4a7a819fba96/volumes"
Feb 16 00:10:25 crc kubenswrapper[4698]: I0216 00:10:25.238838 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5308c07c-9d3d-4ead-8c6e-19c51adf5228" path="/var/lib/kubelet/pods/5308c07c-9d3d-4ead-8c6e-19c51adf5228/volumes"
Feb 16 00:10:25 crc kubenswrapper[4698]: I0216 00:10:25.239477 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fbd7758-34a7-49b8-a669-53bf408520f3" path="/var/lib/kubelet/pods/5fbd7758-34a7-49b8-a669-53bf408520f3/volumes"
Feb 16 00:10:25 crc kubenswrapper[4698]: I0216 00:10:25.381982 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9" event={"ID":"ae4b48ff-e232-4503-94ef-8acfdf2479f2","Type":"ContainerStarted","Data":"cb02c744cadf5aca9a2d93dc156a2001de075d82f5b9609557610d270c021b3c"}
Feb 16 00:10:25 crc kubenswrapper[4698]: I0216 00:10:25.382038 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9" event={"ID":"ae4b48ff-e232-4503-94ef-8acfdf2479f2","Type":"ContainerStarted","Data":"930db59cbb3e6bcd8a0923b97786a14f494d95a51a2eea0db1fc424f07684088"}
Feb 16 00:10:25 crc kubenswrapper[4698]: I0216 00:10:25.382194 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:10:25 crc kubenswrapper[4698]: I0216 00:10:25.386525 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45" event={"ID":"74a0358a-8c91-42a5-9763-83f17e4fd05d","Type":"ContainerStarted","Data":"77428bdc910e384129280ce7b040437109b263cf2f74ce66a52cd01f387940c1"}
Feb 16 00:10:25 crc kubenswrapper[4698]: I0216 00:10:25.387121 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
Feb 16 00:10:25 crc kubenswrapper[4698]: I0216 00:10:25.387146 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"
event={"ID":"74a0358a-8c91-42a5-9763-83f17e4fd05d","Type":"ContainerStarted","Data":"a9e127b30e87800980444509b875027fa4a8187f57e67379a67397a80f8b9941"} Feb 16 00:10:25 crc kubenswrapper[4698]: I0216 00:10:25.387657 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9" Feb 16 00:10:25 crc kubenswrapper[4698]: I0216 00:10:25.394721 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45" Feb 16 00:10:25 crc kubenswrapper[4698]: I0216 00:10:25.404359 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9" podStartSLOduration=3.404318047 podStartE2EDuration="3.404318047s" podCreationTimestamp="2026-02-16 00:10:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:25.401968565 +0000 UTC m=+235.059867327" watchObservedRunningTime="2026-02-16 00:10:25.404318047 +0000 UTC m=+235.062216809" Feb 16 00:10:25 crc kubenswrapper[4698]: I0216 00:10:25.423370 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45" podStartSLOduration=3.423348185 podStartE2EDuration="3.423348185s" podCreationTimestamp="2026-02-16 00:10:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:25.418057322 +0000 UTC m=+235.075956084" watchObservedRunningTime="2026-02-16 00:10:25.423348185 +0000 UTC m=+235.081246947" Feb 16 00:10:25 crc kubenswrapper[4698]: I0216 00:10:25.710232 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bwn9c" Feb 16 00:10:25 crc 
kubenswrapper[4698]: I0216 00:10:25.756360 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bwn9c" Feb 16 00:10:26 crc kubenswrapper[4698]: I0216 00:10:26.065178 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dclvg" Feb 16 00:10:26 crc kubenswrapper[4698]: I0216 00:10:26.110494 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dclvg" Feb 16 00:10:27 crc kubenswrapper[4698]: I0216 00:10:27.045664 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:10:27 crc kubenswrapper[4698]: I0216 00:10:27.045727 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:10:27 crc kubenswrapper[4698]: I0216 00:10:27.045787 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:10:27 crc kubenswrapper[4698]: I0216 00:10:27.046441 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8"} pod="openshift-machine-config-operator/machine-config-daemon-z56m2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 00:10:27 crc kubenswrapper[4698]: I0216 00:10:27.046513 4698 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" containerID="cri-o://ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8" gracePeriod=600 Feb 16 00:10:27 crc kubenswrapper[4698]: I0216 00:10:27.401222 4698 generic.go:334] "Generic (PLEG): container finished" podID="7b351654-277f-4d0d-84f9-b003f934936c" containerID="ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8" exitCode=0 Feb 16 00:10:27 crc kubenswrapper[4698]: I0216 00:10:27.401342 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerDied","Data":"ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8"} Feb 16 00:10:27 crc kubenswrapper[4698]: I0216 00:10:27.401747 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerStarted","Data":"4c48928118e373a7f22de0377bb5928d81fc331d1ecea88c18cef22f90c1e4a6"} Feb 16 00:10:27 crc kubenswrapper[4698]: I0216 00:10:27.546164 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" podUID="d5ba3ac6-6e8d-4965-81f1-c1805efed27f" containerName="oauth-openshift" containerID="cri-o://64b357295a4aff3e20539057d5bf09b8b7005fb53711468ce327ccc8ef917211" gracePeriod=15 Feb 16 00:10:27 crc kubenswrapper[4698]: I0216 00:10:27.973812 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.030037 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-trusted-ca-bundle\") pod \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.030135 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-service-ca\") pod \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.030184 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-router-certs\") pod \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.030222 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-audit-dir\") pod \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.030249 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-provider-selection\") pod \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\" (UID: 
\"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.030289 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-serving-cert\") pod \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.030318 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-audit-policies\") pod \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.030345 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxng9\" (UniqueName: \"kubernetes.io/projected/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-kube-api-access-pxng9\") pod \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.030408 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-idp-0-file-data\") pod \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.030443 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-error\") pod \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.030473 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-login\") pod \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.030506 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-cliconfig\") pod \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.030528 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-session\") pod \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.030560 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-ocp-branding-template\") pod \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\" (UID: \"d5ba3ac6-6e8d-4965-81f1-c1805efed27f\") " Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.031399 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d5ba3ac6-6e8d-4965-81f1-c1805efed27f" (UID: "d5ba3ac6-6e8d-4965-81f1-c1805efed27f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.031556 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d5ba3ac6-6e8d-4965-81f1-c1805efed27f" (UID: "d5ba3ac6-6e8d-4965-81f1-c1805efed27f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.031920 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.031965 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.033916 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d5ba3ac6-6e8d-4965-81f1-c1805efed27f" (UID: "d5ba3ac6-6e8d-4965-81f1-c1805efed27f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.034505 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d5ba3ac6-6e8d-4965-81f1-c1805efed27f" (UID: "d5ba3ac6-6e8d-4965-81f1-c1805efed27f"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.034752 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d5ba3ac6-6e8d-4965-81f1-c1805efed27f" (UID: "d5ba3ac6-6e8d-4965-81f1-c1805efed27f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.039588 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d5ba3ac6-6e8d-4965-81f1-c1805efed27f" (UID: "d5ba3ac6-6e8d-4965-81f1-c1805efed27f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.040037 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d5ba3ac6-6e8d-4965-81f1-c1805efed27f" (UID: "d5ba3ac6-6e8d-4965-81f1-c1805efed27f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.040219 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d5ba3ac6-6e8d-4965-81f1-c1805efed27f" (UID: "d5ba3ac6-6e8d-4965-81f1-c1805efed27f"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.040451 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d5ba3ac6-6e8d-4965-81f1-c1805efed27f" (UID: "d5ba3ac6-6e8d-4965-81f1-c1805efed27f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.040558 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-kube-api-access-pxng9" (OuterVolumeSpecName: "kube-api-access-pxng9") pod "d5ba3ac6-6e8d-4965-81f1-c1805efed27f" (UID: "d5ba3ac6-6e8d-4965-81f1-c1805efed27f"). InnerVolumeSpecName "kube-api-access-pxng9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.041517 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d5ba3ac6-6e8d-4965-81f1-c1805efed27f" (UID: "d5ba3ac6-6e8d-4965-81f1-c1805efed27f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.041725 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d5ba3ac6-6e8d-4965-81f1-c1805efed27f" (UID: "d5ba3ac6-6e8d-4965-81f1-c1805efed27f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.042035 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d5ba3ac6-6e8d-4965-81f1-c1805efed27f" (UID: "d5ba3ac6-6e8d-4965-81f1-c1805efed27f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.042052 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d5ba3ac6-6e8d-4965-81f1-c1805efed27f" (UID: "d5ba3ac6-6e8d-4965-81f1-c1805efed27f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.134002 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.134051 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.134064 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.134077 4698 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.134090 4698 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.134101 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.134114 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.134124 4698 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.134133 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxng9\" (UniqueName: \"kubernetes.io/projected/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-kube-api-access-pxng9\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.134144 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:28 crc 
kubenswrapper[4698]: I0216 00:10:28.134153 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.134166 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5ba3ac6-6e8d-4965-81f1-c1805efed27f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.418668 4698 generic.go:334] "Generic (PLEG): container finished" podID="d5ba3ac6-6e8d-4965-81f1-c1805efed27f" containerID="64b357295a4aff3e20539057d5bf09b8b7005fb53711468ce327ccc8ef917211" exitCode=0 Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.418738 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" event={"ID":"d5ba3ac6-6e8d-4965-81f1-c1805efed27f","Type":"ContainerDied","Data":"64b357295a4aff3e20539057d5bf09b8b7005fb53711468ce327ccc8ef917211"} Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.418782 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" event={"ID":"d5ba3ac6-6e8d-4965-81f1-c1805efed27f","Type":"ContainerDied","Data":"d8a5e1806b7e314717281d8b7adee3a4bde4af26475b3d2124ae4da83e0edd75"} Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.418812 4698 scope.go:117] "RemoveContainer" containerID="64b357295a4aff3e20539057d5bf09b8b7005fb53711468ce327ccc8ef917211" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.419054 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ckzvr" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.463758 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ckzvr"] Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.463766 4698 scope.go:117] "RemoveContainer" containerID="64b357295a4aff3e20539057d5bf09b8b7005fb53711468ce327ccc8ef917211" Feb 16 00:10:28 crc kubenswrapper[4698]: E0216 00:10:28.464587 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64b357295a4aff3e20539057d5bf09b8b7005fb53711468ce327ccc8ef917211\": container with ID starting with 64b357295a4aff3e20539057d5bf09b8b7005fb53711468ce327ccc8ef917211 not found: ID does not exist" containerID="64b357295a4aff3e20539057d5bf09b8b7005fb53711468ce327ccc8ef917211" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.464683 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b357295a4aff3e20539057d5bf09b8b7005fb53711468ce327ccc8ef917211"} err="failed to get container status \"64b357295a4aff3e20539057d5bf09b8b7005fb53711468ce327ccc8ef917211\": rpc error: code = NotFound desc = could not find container \"64b357295a4aff3e20539057d5bf09b8b7005fb53711468ce327ccc8ef917211\": container with ID starting with 64b357295a4aff3e20539057d5bf09b8b7005fb53711468ce327ccc8ef917211 not found: ID does not exist" Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.470077 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ckzvr"] Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.670576 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dclvg"] Feb 16 00:10:28 crc kubenswrapper[4698]: I0216 00:10:28.670884 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-dclvg" podUID="9888dd66-5ef5-499a-92a8-c9fd32335a20" containerName="registry-server" containerID="cri-o://5166dbda72366b53f2691160c08594350673bf96e92d216ca5b2fd86b81ec6fc" gracePeriod=2
Feb 16 00:10:29 crc kubenswrapper[4698]: I0216 00:10:29.240396 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ba3ac6-6e8d-4965-81f1-c1805efed27f" path="/var/lib/kubelet/pods/d5ba3ac6-6e8d-4965-81f1-c1805efed27f/volumes"
Feb 16 00:10:29 crc kubenswrapper[4698]: I0216 00:10:29.430853 4698 generic.go:334] "Generic (PLEG): container finished" podID="9888dd66-5ef5-499a-92a8-c9fd32335a20" containerID="5166dbda72366b53f2691160c08594350673bf96e92d216ca5b2fd86b81ec6fc" exitCode=0
Feb 16 00:10:29 crc kubenswrapper[4698]: I0216 00:10:29.430964 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dclvg" event={"ID":"9888dd66-5ef5-499a-92a8-c9fd32335a20","Type":"ContainerDied","Data":"5166dbda72366b53f2691160c08594350673bf96e92d216ca5b2fd86b81ec6fc"}
Feb 16 00:10:29 crc kubenswrapper[4698]: I0216 00:10:29.774818 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dclvg"
Feb 16 00:10:29 crc kubenswrapper[4698]: I0216 00:10:29.856158 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9888dd66-5ef5-499a-92a8-c9fd32335a20-catalog-content\") pod \"9888dd66-5ef5-499a-92a8-c9fd32335a20\" (UID: \"9888dd66-5ef5-499a-92a8-c9fd32335a20\") "
Feb 16 00:10:29 crc kubenswrapper[4698]: I0216 00:10:29.856284 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9888dd66-5ef5-499a-92a8-c9fd32335a20-utilities\") pod \"9888dd66-5ef5-499a-92a8-c9fd32335a20\" (UID: \"9888dd66-5ef5-499a-92a8-c9fd32335a20\") "
Feb 16 00:10:29 crc kubenswrapper[4698]: I0216 00:10:29.856401 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpgz5\" (UniqueName: \"kubernetes.io/projected/9888dd66-5ef5-499a-92a8-c9fd32335a20-kube-api-access-cpgz5\") pod \"9888dd66-5ef5-499a-92a8-c9fd32335a20\" (UID: \"9888dd66-5ef5-499a-92a8-c9fd32335a20\") "
Feb 16 00:10:29 crc kubenswrapper[4698]: I0216 00:10:29.857367 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9888dd66-5ef5-499a-92a8-c9fd32335a20-utilities" (OuterVolumeSpecName: "utilities") pod "9888dd66-5ef5-499a-92a8-c9fd32335a20" (UID: "9888dd66-5ef5-499a-92a8-c9fd32335a20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:10:29 crc kubenswrapper[4698]: I0216 00:10:29.865463 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9888dd66-5ef5-499a-92a8-c9fd32335a20-kube-api-access-cpgz5" (OuterVolumeSpecName: "kube-api-access-cpgz5") pod "9888dd66-5ef5-499a-92a8-c9fd32335a20" (UID: "9888dd66-5ef5-499a-92a8-c9fd32335a20"). InnerVolumeSpecName "kube-api-access-cpgz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:10:29 crc kubenswrapper[4698]: I0216 00:10:29.958407 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9888dd66-5ef5-499a-92a8-c9fd32335a20-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:29 crc kubenswrapper[4698]: I0216 00:10:29.958460 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpgz5\" (UniqueName: \"kubernetes.io/projected/9888dd66-5ef5-499a-92a8-c9fd32335a20-kube-api-access-cpgz5\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.015728 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9888dd66-5ef5-499a-92a8-c9fd32335a20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9888dd66-5ef5-499a-92a8-c9fd32335a20" (UID: "9888dd66-5ef5-499a-92a8-c9fd32335a20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.061535 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9888dd66-5ef5-499a-92a8-c9fd32335a20-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.449084 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dclvg" event={"ID":"9888dd66-5ef5-499a-92a8-c9fd32335a20","Type":"ContainerDied","Data":"b2156e14dff14308a7a3f0405323fe70039d22deb6351935091edfc13c74e19e"}
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.449165 4698 scope.go:117] "RemoveContainer" containerID="5166dbda72366b53f2691160c08594350673bf96e92d216ca5b2fd86b81ec6fc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.449524 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dclvg"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.484257 4698 scope.go:117] "RemoveContainer" containerID="5d4143de8b33d9628ff70545d6933825009c9851d891a8f1736e89ddc2be31a8"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.489508 4698 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.489907 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5308c07c-9d3d-4ead-8c6e-19c51adf5228" containerName="registry-server"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.489932 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5308c07c-9d3d-4ead-8c6e-19c51adf5228" containerName="registry-server"
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.489950 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5308c07c-9d3d-4ead-8c6e-19c51adf5228" containerName="extract-content"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.489960 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5308c07c-9d3d-4ead-8c6e-19c51adf5228" containerName="extract-content"
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.489969 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9888dd66-5ef5-499a-92a8-c9fd32335a20" containerName="extract-utilities"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.489977 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9888dd66-5ef5-499a-92a8-c9fd32335a20" containerName="extract-utilities"
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.490019 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ba3ac6-6e8d-4965-81f1-c1805efed27f" containerName="oauth-openshift"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.490029 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ba3ac6-6e8d-4965-81f1-c1805efed27f" containerName="oauth-openshift"
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.490038 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5308c07c-9d3d-4ead-8c6e-19c51adf5228" containerName="extract-utilities"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.490045 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5308c07c-9d3d-4ead-8c6e-19c51adf5228" containerName="extract-utilities"
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.490054 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9888dd66-5ef5-499a-92a8-c9fd32335a20" containerName="registry-server"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.491058 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9888dd66-5ef5-499a-92a8-c9fd32335a20" containerName="registry-server"
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.491082 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9888dd66-5ef5-499a-92a8-c9fd32335a20" containerName="extract-content"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.491089 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9888dd66-5ef5-499a-92a8-c9fd32335a20" containerName="extract-content"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.491234 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9888dd66-5ef5-499a-92a8-c9fd32335a20" containerName="registry-server"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.491261 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="5308c07c-9d3d-4ead-8c6e-19c51adf5228" containerName="registry-server"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.491270 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ba3ac6-6e8d-4965-81f1-c1805efed27f" containerName="oauth-openshift"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.491841 4698 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492059 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492125 4698 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492189 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4" gracePeriod=15
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492344 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30" gracePeriod=15
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492383 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012" gracePeriod=15
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.492414 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492440 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492365 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401" gracePeriod=15
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492456 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9" gracePeriod=15
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.492451 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492490 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.492515 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492523 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.492549 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492556 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.492575 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492583 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.492591 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492597 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.492605 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492628 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492856 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492874 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492886 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492894 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492902 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.492910 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.497827 4698 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.530061 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dclvg"]
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.533797 4698 scope.go:117] "RemoveContainer" containerID="183173e6d34a45c3a2fc8cde6a5e3c255c736a4595aa47eeb5b6ed633c95062a"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.535879 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dclvg"]
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.569071 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.569146 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.569187 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.569225 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.569268 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.569343 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.569392 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.569453 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.596182 4698 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670628 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670694 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670730 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670770 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670801 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670829 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670850 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670857 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670893 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670885 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670893 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670917 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670931 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670955 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.670946 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.671045 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: I0216 00:10:30.898726 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:30 crc kubenswrapper[4698]: E0216 00:10:30.942225 4698 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.143:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18949196f12f89d3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:10:30.940994003 +0000 UTC m=+240.598892785,LastTimestamp:2026-02-16 00:10:30.940994003 +0000 UTC m=+240.598892785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:10:31 crc kubenswrapper[4698]: I0216 00:10:31.245667 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9888dd66-5ef5-499a-92a8-c9fd32335a20" path="/var/lib/kubelet/pods/9888dd66-5ef5-499a-92a8-c9fd32335a20/volumes"
Feb 16 00:10:31 crc kubenswrapper[4698]: I0216 00:10:31.458825 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 16 00:10:31 crc kubenswrapper[4698]: I0216 00:10:31.461368 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 16 00:10:31 crc kubenswrapper[4698]: I0216 00:10:31.462525 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30" exitCode=0
Feb 16 00:10:31 crc kubenswrapper[4698]: I0216 00:10:31.462551 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401" exitCode=0
Feb 16 00:10:31 crc kubenswrapper[4698]: I0216 00:10:31.462560 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012" exitCode=0
Feb 16 00:10:31 crc kubenswrapper[4698]: I0216 00:10:31.462569 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9" exitCode=2
Feb 16 00:10:31 crc kubenswrapper[4698]: I0216 00:10:31.462677 4698 scope.go:117] "RemoveContainer" containerID="c1789d7709b2795225d0ed0fc0c3be8c84a666babcb2cdf6900007ff8c50ff4f"
Feb 16 00:10:31 crc kubenswrapper[4698]: I0216 00:10:31.468394 4698 generic.go:334] "Generic (PLEG): container finished" podID="b1897092-951e-4baa-b926-243514d4e981" containerID="89dbe9da5fd74c999953912da4cfe24835f865d86db46bb7cba3e6c5da3dbf72" exitCode=0
Feb 16 00:10:31 crc kubenswrapper[4698]: I0216 00:10:31.468510 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b1897092-951e-4baa-b926-243514d4e981","Type":"ContainerDied","Data":"89dbe9da5fd74c999953912da4cfe24835f865d86db46bb7cba3e6c5da3dbf72"}
Feb 16 00:10:31 crc kubenswrapper[4698]: I0216 00:10:31.469315 4698 status_manager.go:851] "Failed to get status for pod" podUID="b1897092-951e-4baa-b926-243514d4e981" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused"
Feb 16 00:10:31 crc kubenswrapper[4698]: I0216 00:10:31.471244 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d9aecad2fbf2cd40ec0007a04c48fc92ef9b913d9e30847857d1646f06b4de28"}
Feb 16 00:10:31 crc kubenswrapper[4698]: I0216 00:10:31.471324 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2fd1d6e8606f51d4181ee0b2aa92d332142d28cce8bed29570f1baa7648c8d0d"}
Feb 16 00:10:31 crc kubenswrapper[4698]: E0216 00:10:31.472273 4698 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:10:31 crc kubenswrapper[4698]: I0216 00:10:31.473271 4698 status_manager.go:851] "Failed to get status for pod" podUID="b1897092-951e-4baa-b926-243514d4e981" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused"
Feb 16 00:10:32 crc kubenswrapper[4698]: I0216 00:10:32.484247 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 16 00:10:32 crc kubenswrapper[4698]: E0216 00:10:32.544064 4698 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused"
Feb 16 00:10:32 crc kubenswrapper[4698]: E0216 00:10:32.545230 4698 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused"
Feb 16 00:10:32 crc kubenswrapper[4698]: E0216 00:10:32.545844 4698 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused"
Feb 16 00:10:32 crc kubenswrapper[4698]: E0216 00:10:32.546384 4698 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused"
Feb 16 00:10:32 crc kubenswrapper[4698]: E0216 00:10:32.546899 4698 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused"
Feb 16 00:10:32 crc kubenswrapper[4698]: I0216 00:10:32.546969 4698 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 16 00:10:32 crc kubenswrapper[4698]: E0216 00:10:32.547515 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="200ms"
Feb 16 00:10:32 crc kubenswrapper[4698]: E0216 00:10:32.748286 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="400ms"
Feb 16 00:10:32 crc kubenswrapper[4698]: I0216 00:10:32.961515 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 16 00:10:32 crc kubenswrapper[4698]: I0216 00:10:32.963932 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 00:10:32 crc kubenswrapper[4698]: I0216 00:10:32.964080 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:10:32 crc kubenswrapper[4698]: I0216 00:10:32.965006 4698 status_manager.go:851] "Failed to get status for pod" podUID="b1897092-951e-4baa-b926-243514d4e981" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused"
Feb 16 00:10:32 crc kubenswrapper[4698]: I0216 00:10:32.965801 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused"
Feb 16 00:10:32 crc kubenswrapper[4698]: I0216 00:10:32.966434 4698 status_manager.go:851] "Failed to get status for pod" podUID="b1897092-951e-4baa-b926-243514d4e981" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused"
Feb 16 00:10:32 crc kubenswrapper[4698]: I0216 00:10:32.967113 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused"
Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.106496 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1897092-951e-4baa-b926-243514d4e981-kube-api-access\") pod \"b1897092-951e-4baa-b926-243514d4e981\" (UID: \"b1897092-951e-4baa-b926-243514d4e981\") "
Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.106738 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.106818 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.106871 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.107088 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1897092-951e-4baa-b926-243514d4e981-var-lock\") pod \"b1897092-951e-4baa-b926-243514d4e981\" (UID: \"b1897092-951e-4baa-b926-243514d4e981\") "
Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.106958 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.106960 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.106989 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.107158 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1897092-951e-4baa-b926-243514d4e981-var-lock" (OuterVolumeSpecName: "var-lock") pod "b1897092-951e-4baa-b926-243514d4e981" (UID: "b1897092-951e-4baa-b926-243514d4e981"). InnerVolumeSpecName "var-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.107410 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1897092-951e-4baa-b926-243514d4e981-kubelet-dir\") pod \"b1897092-951e-4baa-b926-243514d4e981\" (UID: \"b1897092-951e-4baa-b926-243514d4e981\") " Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.107487 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1897092-951e-4baa-b926-243514d4e981-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b1897092-951e-4baa-b926-243514d4e981" (UID: "b1897092-951e-4baa-b926-243514d4e981"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.107901 4698 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.107934 4698 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.107952 4698 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.107970 4698 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1897092-951e-4baa-b926-243514d4e981-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.107987 4698 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/b1897092-951e-4baa-b926-243514d4e981-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.117069 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1897092-951e-4baa-b926-243514d4e981-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b1897092-951e-4baa-b926-243514d4e981" (UID: "b1897092-951e-4baa-b926-243514d4e981"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:10:33 crc kubenswrapper[4698]: E0216 00:10:33.151279 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="800ms" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.208425 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1897092-951e-4baa-b926-243514d4e981-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.252299 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 00:10:33 crc kubenswrapper[4698]: E0216 00:10:33.309572 4698 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" volumeName="registry-storage" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 
00:10:33.497875 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.499005 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4" exitCode=0 Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.499140 4698 scope.go:117] "RemoveContainer" containerID="35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.499143 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.500726 4698 status_manager.go:851] "Failed to get status for pod" podUID="b1897092-951e-4baa-b926-243514d4e981" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.501231 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.503314 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Feb 16 00:10:33 crc 
kubenswrapper[4698]: I0216 00:10:33.503700 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b1897092-951e-4baa-b926-243514d4e981","Type":"ContainerDied","Data":"e896bef42bd57e3791625e78f3feb7aaffbbd78c2a5def6eb44a6de2ecf5ba4a"} Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.503730 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e896bef42bd57e3791625e78f3feb7aaffbbd78c2a5def6eb44a6de2ecf5ba4a" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.503796 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.503789 4698 status_manager.go:851] "Failed to get status for pod" podUID="b1897092-951e-4baa-b926-243514d4e981" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.509508 4698 status_manager.go:851] "Failed to get status for pod" podUID="b1897092-951e-4baa-b926-243514d4e981" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.510070 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.528671 4698 scope.go:117] "RemoveContainer" 
containerID="64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.552207 4698 scope.go:117] "RemoveContainer" containerID="2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.574053 4698 scope.go:117] "RemoveContainer" containerID="11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.596996 4698 scope.go:117] "RemoveContainer" containerID="bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.614592 4698 scope.go:117] "RemoveContainer" containerID="b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.646602 4698 scope.go:117] "RemoveContainer" containerID="35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30" Feb 16 00:10:33 crc kubenswrapper[4698]: E0216 00:10:33.649868 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\": container with ID starting with 35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30 not found: ID does not exist" containerID="35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.649982 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30"} err="failed to get container status \"35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\": rpc error: code = NotFound desc = could not find container \"35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30\": container with ID starting with 
35cf3382c5fd0093f7c84806787728433f58b5f3c9e399199f739d95738cda30 not found: ID does not exist" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.650176 4698 scope.go:117] "RemoveContainer" containerID="64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401" Feb 16 00:10:33 crc kubenswrapper[4698]: E0216 00:10:33.651092 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\": container with ID starting with 64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401 not found: ID does not exist" containerID="64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.651141 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401"} err="failed to get container status \"64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\": rpc error: code = NotFound desc = could not find container \"64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401\": container with ID starting with 64f4baf910cb5881ed6268e7f56068c7f4fda600e5047175ec38c687308ac401 not found: ID does not exist" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.651173 4698 scope.go:117] "RemoveContainer" containerID="2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012" Feb 16 00:10:33 crc kubenswrapper[4698]: E0216 00:10:33.652789 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\": container with ID starting with 2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012 not found: ID does not exist" containerID="2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012" Feb 16 00:10:33 crc 
kubenswrapper[4698]: I0216 00:10:33.652830 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012"} err="failed to get container status \"2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\": rpc error: code = NotFound desc = could not find container \"2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012\": container with ID starting with 2787db2159756509bfe6e45b5b2446bd494901ccc74a02dc406747e0a3349012 not found: ID does not exist" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.652855 4698 scope.go:117] "RemoveContainer" containerID="11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9" Feb 16 00:10:33 crc kubenswrapper[4698]: E0216 00:10:33.653294 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\": container with ID starting with 11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9 not found: ID does not exist" containerID="11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.653396 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9"} err="failed to get container status \"11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\": rpc error: code = NotFound desc = could not find container \"11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9\": container with ID starting with 11c581669dca2ff5b05352a7d5f7ab5e1c930244a624ffa7722b6480b3ecdde9 not found: ID does not exist" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.653458 4698 scope.go:117] "RemoveContainer" containerID="bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4" Feb 16 
00:10:33 crc kubenswrapper[4698]: E0216 00:10:33.653966 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\": container with ID starting with bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4 not found: ID does not exist" containerID="bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.653995 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4"} err="failed to get container status \"bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\": rpc error: code = NotFound desc = could not find container \"bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4\": container with ID starting with bbc6846d5f450aae74c7a50a9511757326aafcbaf5f779158b2e24107b42ffb4 not found: ID does not exist" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.654016 4698 scope.go:117] "RemoveContainer" containerID="b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4" Feb 16 00:10:33 crc kubenswrapper[4698]: E0216 00:10:33.654429 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\": container with ID starting with b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4 not found: ID does not exist" containerID="b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4" Feb 16 00:10:33 crc kubenswrapper[4698]: I0216 00:10:33.654468 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4"} err="failed to get container status 
\"b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\": rpc error: code = NotFound desc = could not find container \"b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4\": container with ID starting with b37802f4327460fc316d1b2a3fd59851008b1b02ad398ae81592f35bf60cc9d4 not found: ID does not exist" Feb 16 00:10:33 crc kubenswrapper[4698]: E0216 00:10:33.952250 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="1.6s" Feb 16 00:10:35 crc kubenswrapper[4698]: E0216 00:10:35.554182 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="3.2s" Feb 16 00:10:38 crc kubenswrapper[4698]: E0216 00:10:38.755293 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="6.4s" Feb 16 00:10:39 crc kubenswrapper[4698]: E0216 00:10:39.007859 4698 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.143:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18949196f12f89d3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:10:30.940994003 +0000 UTC m=+240.598892785,LastTimestamp:2026-02-16 00:10:30.940994003 +0000 UTC m=+240.598892785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:10:41 crc kubenswrapper[4698]: I0216 00:10:41.231309 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:41 crc kubenswrapper[4698]: I0216 00:10:41.236998 4698 status_manager.go:851] "Failed to get status for pod" podUID="b1897092-951e-4baa-b926-243514d4e981" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Feb 16 00:10:41 crc kubenswrapper[4698]: I0216 00:10:41.237413 4698 status_manager.go:851] "Failed to get status for pod" podUID="b1897092-951e-4baa-b926-243514d4e981" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Feb 16 00:10:41 crc kubenswrapper[4698]: I0216 00:10:41.262896 4698 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ca29dec-0622-4036-b9b7-9d1ab05147df" Feb 16 00:10:41 crc kubenswrapper[4698]: I0216 00:10:41.262951 4698 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ca29dec-0622-4036-b9b7-9d1ab05147df" Feb 16 00:10:41 crc kubenswrapper[4698]: E0216 00:10:41.263547 4698 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:41 crc kubenswrapper[4698]: I0216 00:10:41.264204 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:41 crc kubenswrapper[4698]: I0216 00:10:41.562687 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"087015bd05d0176ad001bdf7688b10a4e79a28a3a5d06f8dbad9283fd954034b"} Feb 16 00:10:42 crc kubenswrapper[4698]: I0216 00:10:42.572263 4698 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="335f3c5a28c67194050efd94c524007babcc22fcdbfb70f126c13824522dc431" exitCode=0 Feb 16 00:10:42 crc kubenswrapper[4698]: I0216 00:10:42.572829 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"335f3c5a28c67194050efd94c524007babcc22fcdbfb70f126c13824522dc431"} Feb 16 00:10:42 crc kubenswrapper[4698]: I0216 00:10:42.573381 4698 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ca29dec-0622-4036-b9b7-9d1ab05147df" Feb 16 00:10:42 crc kubenswrapper[4698]: I0216 00:10:42.573417 4698 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ca29dec-0622-4036-b9b7-9d1ab05147df" Feb 16 00:10:42 crc 
kubenswrapper[4698]: E0216 00:10:42.573960 4698 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:42 crc kubenswrapper[4698]: I0216 00:10:42.574118 4698 status_manager.go:851] "Failed to get status for pod" podUID="b1897092-951e-4baa-b926-243514d4e981" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Feb 16 00:10:43 crc kubenswrapper[4698]: I0216 00:10:43.584931 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a773efba68196ac1309b332023a2b0686da96c1da4de4fcc8d49a996b03f5567"} Feb 16 00:10:43 crc kubenswrapper[4698]: I0216 00:10:43.585224 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"52510a48a7e31f33eee8b83a97bcfe9659e6cc5e5d38a0d1fff8fe37037976f3"} Feb 16 00:10:43 crc kubenswrapper[4698]: I0216 00:10:43.585238 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7683419efc8c6db3f9fe656a3112cb1da467f2c8209b582b0d26f992c65df9cf"} Feb 16 00:10:44 crc kubenswrapper[4698]: I0216 00:10:44.593402 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eea6fb81191cd7d211d12985273431b3c40531edbf161bbb5b782bef315fbcfe"} Feb 16 00:10:44 crc 
kubenswrapper[4698]: I0216 00:10:44.593763 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"93ad89e651bbbda8febedd52cbb4b735e768c47f0e29b0135607fd7212511160"} Feb 16 00:10:44 crc kubenswrapper[4698]: I0216 00:10:44.593998 4698 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ca29dec-0622-4036-b9b7-9d1ab05147df" Feb 16 00:10:44 crc kubenswrapper[4698]: I0216 00:10:44.594013 4698 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ca29dec-0622-4036-b9b7-9d1ab05147df" Feb 16 00:10:44 crc kubenswrapper[4698]: I0216 00:10:44.594225 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:45 crc kubenswrapper[4698]: I0216 00:10:45.602023 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 00:10:45 crc kubenswrapper[4698]: I0216 00:10:45.602080 4698 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493" exitCode=1 Feb 16 00:10:45 crc kubenswrapper[4698]: I0216 00:10:45.602115 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493"} Feb 16 00:10:45 crc kubenswrapper[4698]: I0216 00:10:45.602660 4698 scope.go:117] "RemoveContainer" containerID="50c2d4e50250fc414b29c16d05de0f649298cffbae6649e4d7b0ce1436ac4493" Feb 16 00:10:46 crc kubenswrapper[4698]: I0216 00:10:46.264898 4698 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:46 crc kubenswrapper[4698]: I0216 00:10:46.265451 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:46 crc kubenswrapper[4698]: I0216 00:10:46.274971 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:46 crc kubenswrapper[4698]: I0216 00:10:46.445699 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:10:46 crc kubenswrapper[4698]: I0216 00:10:46.615495 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 00:10:46 crc kubenswrapper[4698]: I0216 00:10:46.615558 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0177fdf05091e0f01c540e187175eac65fcd3cda6887ab98b6c7c15842e567fa"} Feb 16 00:10:48 crc kubenswrapper[4698]: I0216 00:10:48.853460 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:10:49 crc kubenswrapper[4698]: I0216 00:10:49.602524 4698 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:50 crc kubenswrapper[4698]: I0216 00:10:50.643607 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log" Feb 16 00:10:50 crc kubenswrapper[4698]: I0216 00:10:50.646599 4698 generic.go:334] "Generic (PLEG): container 
finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="eea6fb81191cd7d211d12985273431b3c40531edbf161bbb5b782bef315fbcfe" exitCode=255 Feb 16 00:10:50 crc kubenswrapper[4698]: I0216 00:10:50.646683 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"eea6fb81191cd7d211d12985273431b3c40531edbf161bbb5b782bef315fbcfe"} Feb 16 00:10:50 crc kubenswrapper[4698]: I0216 00:10:50.647183 4698 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ca29dec-0622-4036-b9b7-9d1ab05147df" Feb 16 00:10:50 crc kubenswrapper[4698]: I0216 00:10:50.647214 4698 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ca29dec-0622-4036-b9b7-9d1ab05147df" Feb 16 00:10:50 crc kubenswrapper[4698]: I0216 00:10:50.651852 4698 scope.go:117] "RemoveContainer" containerID="eea6fb81191cd7d211d12985273431b3c40531edbf161bbb5b782bef315fbcfe" Feb 16 00:10:50 crc kubenswrapper[4698]: I0216 00:10:50.654958 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:51 crc kubenswrapper[4698]: I0216 00:10:51.246189 4698 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0060507c-ca32-436c-a7e3-7c5944fb447c" Feb 16 00:10:51 crc kubenswrapper[4698]: I0216 00:10:51.655336 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log" Feb 16 00:10:51 crc kubenswrapper[4698]: I0216 00:10:51.657764 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9917c62fecc1042d97cd7aa5aba94d79fe458def89491948e025aaa640d56102"} Feb 16 00:10:51 crc kubenswrapper[4698]: I0216 00:10:51.658014 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:51 crc kubenswrapper[4698]: I0216 00:10:51.658100 4698 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ca29dec-0622-4036-b9b7-9d1ab05147df" Feb 16 00:10:51 crc kubenswrapper[4698]: I0216 00:10:51.658125 4698 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ca29dec-0622-4036-b9b7-9d1ab05147df" Feb 16 00:10:51 crc kubenswrapper[4698]: I0216 00:10:51.662312 4698 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0060507c-ca32-436c-a7e3-7c5944fb447c" Feb 16 00:10:52 crc kubenswrapper[4698]: I0216 00:10:52.663898 4698 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ca29dec-0622-4036-b9b7-9d1ab05147df" Feb 16 00:10:52 crc kubenswrapper[4698]: I0216 00:10:52.664187 4698 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ca29dec-0622-4036-b9b7-9d1ab05147df" Feb 16 00:10:52 crc kubenswrapper[4698]: I0216 00:10:52.667471 4698 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0060507c-ca32-436c-a7e3-7c5944fb447c" Feb 16 00:10:56 crc kubenswrapper[4698]: I0216 00:10:56.444874 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:10:56 crc kubenswrapper[4698]: I0216 
00:10:56.453044 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:10:56 crc kubenswrapper[4698]: I0216 00:10:56.695608 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:10:57 crc kubenswrapper[4698]: I0216 00:10:57.086038 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 00:11:00 crc kubenswrapper[4698]: I0216 00:11:00.800582 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 00:11:00 crc kubenswrapper[4698]: I0216 00:11:00.897892 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 00:11:00 crc kubenswrapper[4698]: I0216 00:11:00.918134 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 00:11:01 crc kubenswrapper[4698]: I0216 00:11:01.104305 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 00:11:01 crc kubenswrapper[4698]: I0216 00:11:01.193577 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 00:11:01 crc kubenswrapper[4698]: I0216 00:11:01.488530 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 00:11:01 crc kubenswrapper[4698]: I0216 00:11:01.681920 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 00:11:01 crc kubenswrapper[4698]: I0216 00:11:01.756157 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 
00:11:01 crc kubenswrapper[4698]: I0216 00:11:01.856255 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 00:11:02 crc kubenswrapper[4698]: I0216 00:11:02.088852 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 00:11:02 crc kubenswrapper[4698]: I0216 00:11:02.152830 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 00:11:02 crc kubenswrapper[4698]: I0216 00:11:02.347474 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 00:11:02 crc kubenswrapper[4698]: I0216 00:11:02.588152 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 00:11:02 crc kubenswrapper[4698]: I0216 00:11:02.751745 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 00:11:02 crc kubenswrapper[4698]: I0216 00:11:02.801311 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 00:11:03 crc kubenswrapper[4698]: I0216 00:11:03.033038 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 00:11:03 crc kubenswrapper[4698]: I0216 00:11:03.102293 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 00:11:03 crc kubenswrapper[4698]: I0216 00:11:03.201199 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 00:11:03 crc kubenswrapper[4698]: I0216 00:11:03.559688 4698 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 00:11:03 crc kubenswrapper[4698]: I0216 00:11:03.569208 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 00:11:03 crc kubenswrapper[4698]: I0216 00:11:03.863582 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 00:11:03 crc kubenswrapper[4698]: I0216 00:11:03.899479 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 00:11:04 crc kubenswrapper[4698]: I0216 00:11:04.064805 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 00:11:04 crc kubenswrapper[4698]: I0216 00:11:04.100037 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 00:11:04 crc kubenswrapper[4698]: I0216 00:11:04.104504 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 00:11:04 crc kubenswrapper[4698]: I0216 00:11:04.139012 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 00:11:04 crc kubenswrapper[4698]: I0216 00:11:04.338158 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 00:11:04 crc kubenswrapper[4698]: I0216 00:11:04.431815 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 00:11:04 crc kubenswrapper[4698]: I0216 00:11:04.513489 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 00:11:04 crc kubenswrapper[4698]: I0216 
00:11:04.753356 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 00:11:04 crc kubenswrapper[4698]: I0216 00:11:04.870039 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 00:11:04 crc kubenswrapper[4698]: I0216 00:11:04.921806 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 00:11:04 crc kubenswrapper[4698]: I0216 00:11:04.953300 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.014838 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.091378 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.194223 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.478586 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.562084 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.599279 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.610660 4698 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.617233 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.642200 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.711752 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.788123 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.805477 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.842029 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.854006 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 00:11:05 crc kubenswrapper[4698]: I0216 00:11:05.954513 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.033180 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.046582 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.182176 4698 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.197718 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.203055 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.231230 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.238425 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.273239 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.283215 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.294181 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.310195 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.314440 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.540909 4698 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.659127 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.701524 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.755116 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.767578 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.776187 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.881030 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 00:11:06 crc kubenswrapper[4698]: I0216 00:11:06.923538 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.022130 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.023894 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.151965 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.173200 4698 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.350479 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.351106 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.430178 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.461686 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.543813 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.703725 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.743493 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.790369 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.863684 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.901149 4698 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.945529 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 00:11:07 crc kubenswrapper[4698]: I0216 00:11:07.982402 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.019215 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.024769 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.141236 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.149190 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.218875 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.230542 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.336300 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.373447 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 00:11:08 crc 
kubenswrapper[4698]: I0216 00:11:08.430547 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.475515 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.525753 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.575569 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.612984 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.616019 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.616025 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.633153 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.720184 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.793284 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 
00:11:08.820388 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.847561 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.979278 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 00:11:08 crc kubenswrapper[4698]: I0216 00:11:08.989754 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 00:11:09 crc kubenswrapper[4698]: I0216 00:11:09.045014 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 00:11:09 crc kubenswrapper[4698]: I0216 00:11:09.052146 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 00:11:09 crc kubenswrapper[4698]: I0216 00:11:09.080594 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 00:11:09 crc kubenswrapper[4698]: I0216 00:11:09.178158 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 00:11:09 crc kubenswrapper[4698]: I0216 00:11:09.270888 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 00:11:09 crc kubenswrapper[4698]: I0216 00:11:09.271404 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 00:11:09 crc kubenswrapper[4698]: I0216 00:11:09.457722 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 00:11:09 crc 
kubenswrapper[4698]: I0216 00:11:09.558306 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 00:11:09 crc kubenswrapper[4698]: I0216 00:11:09.631461 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 00:11:09 crc kubenswrapper[4698]: I0216 00:11:09.796707 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 00:11:09 crc kubenswrapper[4698]: I0216 00:11:09.975055 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.004603 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.059164 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.067172 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.083569 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.131245 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.149766 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.175387 4698 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.413197 4698 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.480178 4698 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.525448 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.543054 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.708826 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.800246 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.829392 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.878539 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.890875 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.948288 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 00:11:10 crc kubenswrapper[4698]: I0216 00:11:10.970894 4698 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"trusted-ca" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.136207 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.239445 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.279341 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.291434 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.396658 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.420553 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.465552 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.514424 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.551481 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.592098 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.658320 4698 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.660690 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.712125 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.732233 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.736495 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.756658 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.761115 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.961380 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 00:11:11 crc kubenswrapper[4698]: I0216 00:11:11.965782 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 00:11:12 crc kubenswrapper[4698]: I0216 00:11:12.036944 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 00:11:12 crc kubenswrapper[4698]: I0216 00:11:12.058162 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 00:11:12 crc kubenswrapper[4698]: I0216 
00:11:12.073334 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 00:11:12 crc kubenswrapper[4698]: I0216 00:11:12.129560 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 00:11:12 crc kubenswrapper[4698]: I0216 00:11:12.221245 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 00:11:12 crc kubenswrapper[4698]: I0216 00:11:12.222532 4698 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 00:11:12 crc kubenswrapper[4698]: I0216 00:11:12.313740 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 00:11:12 crc kubenswrapper[4698]: I0216 00:11:12.435591 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 00:11:12 crc kubenswrapper[4698]: I0216 00:11:12.596139 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 00:11:12 crc kubenswrapper[4698]: I0216 00:11:12.682377 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 00:11:12 crc kubenswrapper[4698]: I0216 00:11:12.716344 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 00:11:12 crc kubenswrapper[4698]: I0216 00:11:12.791166 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 00:11:12 crc kubenswrapper[4698]: I0216 00:11:12.896291 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 
00:11:12 crc kubenswrapper[4698]: I0216 00:11:12.912967 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 00:11:12 crc kubenswrapper[4698]: I0216 00:11:12.997208 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 00:11:13 crc kubenswrapper[4698]: I0216 00:11:13.031035 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 00:11:13 crc kubenswrapper[4698]: I0216 00:11:13.044896 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 00:11:13 crc kubenswrapper[4698]: I0216 00:11:13.048760 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 00:11:13 crc kubenswrapper[4698]: I0216 00:11:13.100680 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 00:11:13 crc kubenswrapper[4698]: I0216 00:11:13.284068 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 00:11:13 crc kubenswrapper[4698]: I0216 00:11:13.590287 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 00:11:13 crc kubenswrapper[4698]: I0216 00:11:13.614579 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 00:11:13 crc kubenswrapper[4698]: I0216 00:11:13.640199 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 00:11:13 crc kubenswrapper[4698]: I0216 00:11:13.650694 4698 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 00:11:13 crc kubenswrapper[4698]: I0216 00:11:13.831335 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 00:11:13 crc kubenswrapper[4698]: I0216 00:11:13.871024 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 00:11:13 crc kubenswrapper[4698]: I0216 00:11:13.944993 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 00:11:13 crc kubenswrapper[4698]: I0216 00:11:13.948211 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 00:11:13 crc kubenswrapper[4698]: I0216 00:11:13.958138 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.045894 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.094239 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.166602 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.185184 4698 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.195074 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.195162 4698 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5","openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.196120 4698 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ca29dec-0622-4036-b9b7-9d1ab05147df" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.196299 4698 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ca29dec-0622-4036-b9b7-9d1ab05147df" Feb 16 00:11:14 crc kubenswrapper[4698]: E0216 00:11:14.201806 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1897092-951e-4baa-b926-243514d4e981" containerName="installer" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.201846 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1897092-951e-4baa-b926-243514d4e981" containerName="installer" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.202062 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1897092-951e-4baa-b926-243514d4e981" containerName="installer" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.203064 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.203086 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.209867 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.210172 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.210357 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.210425 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.210452 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.210809 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.211355 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.211388 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.212175 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.212332 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 00:11:14 
crc kubenswrapper[4698]: I0216 00:11:14.212837 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.212972 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.220849 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.222691 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.233888 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.254170 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.254267 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-service-ca\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.254295 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-audit-policies\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.254314 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-router-certs\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.254332 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-user-template-login\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.254357 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-audit-dir\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.254374 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.254406 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.254431 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.254449 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-session\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.254475 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-user-template-error\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" 
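The entries above show the kubelet volume reconciler walking each volume of the oauth-openshift pod through its three logged phases: `operationExecutor.VerifyControllerAttachedVolume started` (reconciler_common.go:245), then `operationExecutor.MountVolume started` (reconciler_common.go:218), then `MountVolume.SetUp succeeded` (operation_generator.go:637). A small, purely illustrative parser (not part of kubelet; the phase list and regex are assumptions based only on the line format visible here) can classify such lines and recover the volume name:

```python
import re

# Hypothetical helper: the three reconciler phases seen in the log above,
# in the order the kubelet emits them for each volume.
PHASES = [
    "operationExecutor.VerifyControllerAttachedVolume started",  # reconciler_common.go:245
    "operationExecutor.MountVolume started",                     # reconciler_common.go:218
    "MountVolume.SetUp succeeded",                               # operation_generator.go:637
]

# The volume name appears as: volume \"<name>\" (quotes are backslash-escaped
# in the structured klog output captured in this file).
VOLUME_RE = re.compile(r'volume \\"([^"\\]+)\\"')

def parse_volume_event(line: str):
    """Return (phase_index, volume_name) for a reconciler log line, else None."""
    for i, marker in enumerate(PHASES):
        if marker in line:
            m = VOLUME_RE.search(line)
            if m:
                return i, m.group(1)
    return None

# Sample assembled from an entry in this log (abbreviated with "...").
sample = (r'I0216 00:11:14.357270 4698 operation_generator.go:637] '
          r'"MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" '
          r'(UniqueName: ...)" pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5"')
print(parse_volume_event(sample))  # (2, 'v4-0-config-system-cliconfig')
```

Feeding every line of this log through such a parser and grouping by volume name would confirm that each of the thirteen volumes (configmaps, secrets, the host-path audit-dir, and the projected kube-api-access token) completes all three phases before the pod's sandbox is started.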
Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.254496 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqhrw\" (UniqueName: \"kubernetes.io/projected/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-kube-api-access-mqhrw\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.254514 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.254541 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.264335 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.270452 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.286121 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 
00:11:14.355786 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.355858 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.355895 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.355918 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-service-ca\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.355940 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-audit-policies\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.355962 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-router-certs\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.355985 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-user-template-login\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.356013 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-audit-dir\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.356030 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: 
I0216 00:11:14.356062 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.356082 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.356104 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-session\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.356128 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-user-template-error\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.356153 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqhrw\" (UniqueName: 
\"kubernetes.io/projected/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-kube-api-access-mqhrw\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.357270 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.357547 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-service-ca\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.357783 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.358184 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-audit-policies\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 
00:11:14.361689 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-audit-dir\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.377538 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.377999 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.380232 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.383249 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.383821 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-user-template-error\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.386400 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-user-template-login\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.386574 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqhrw\" (UniqueName: \"kubernetes.io/projected/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-kube-api-access-mqhrw\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.390601 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-router-certs\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.395111 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a579b55d-8dee-452a-ac0f-e1e15e4e7a6f-v4-0-config-system-session\") pod \"oauth-openshift-9b46ffd8b-lv5k5\" (UID: \"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.399171 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.532817 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.769759 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.831200 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.892197 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.913023 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.914124 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.990679 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.990598739 podStartE2EDuration="25.990598739s" podCreationTimestamp="2026-02-16 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:14.279846718 +0000 UTC m=+283.937745480" watchObservedRunningTime="2026-02-16 00:11:14.990598739 +0000 UTC m=+284.648497501" Feb 16 00:11:14 crc kubenswrapper[4698]: I0216 00:11:14.997640 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5"] Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.099442 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.123983 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.172050 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.205500 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.218294 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.340912 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.349329 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.434288 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.456206 4698 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.581557 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.655255 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.803355 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.807273 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" event={"ID":"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f","Type":"ContainerStarted","Data":"8763659a9329f8ba4b6f186d9ca7df4f61a156bdaea3f83e2e9685da19a6efef"} Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.807381 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" event={"ID":"a579b55d-8dee-452a-ac0f-e1e15e4e7a6f","Type":"ContainerStarted","Data":"ff103fd9c1b47bbba160b195274f70140f5b029bd577c358cc039e725af17e39"} Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.807992 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.852379 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.852554 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" podStartSLOduration=73.852527407 
podStartE2EDuration="1m13.852527407s" podCreationTimestamp="2026-02-16 00:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:15.851631839 +0000 UTC m=+285.509530631" watchObservedRunningTime="2026-02-16 00:11:15.852527407 +0000 UTC m=+285.510426169" Feb 16 00:11:15 crc kubenswrapper[4698]: I0216 00:11:15.945191 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9b46ffd8b-lv5k5" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.006706 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.021416 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.100910 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.119714 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.123953 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.181391 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.322875 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.361535 4698 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.390089 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.462356 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.598261 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.626974 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.632984 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.725323 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.725428 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.801472 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 00:11:16 crc kubenswrapper[4698]: I0216 00:11:16.803714 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 00:11:17 crc kubenswrapper[4698]: I0216 00:11:17.146888 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 00:11:17 crc kubenswrapper[4698]: I0216 
00:11:17.215657 4698 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 00:11:17 crc kubenswrapper[4698]: I0216 00:11:17.652896 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 00:11:17 crc kubenswrapper[4698]: I0216 00:11:17.662149 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 00:11:17 crc kubenswrapper[4698]: I0216 00:11:17.691349 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 00:11:18 crc kubenswrapper[4698]: I0216 00:11:18.047853 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 00:11:18 crc kubenswrapper[4698]: I0216 00:11:18.174174 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 00:11:18 crc kubenswrapper[4698]: I0216 00:11:18.339637 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 00:11:18 crc kubenswrapper[4698]: I0216 00:11:18.347723 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 00:11:19 crc kubenswrapper[4698]: I0216 00:11:19.434195 4698 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.372677 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"] Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.373062 4698 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45" podUID="74a0358a-8c91-42a5-9763-83f17e4fd05d" containerName="controller-manager" containerID="cri-o://77428bdc910e384129280ce7b040437109b263cf2f74ce66a52cd01f387940c1" gracePeriod=30 Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.465647 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"] Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.466865 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9" podUID="ae4b48ff-e232-4503-94ef-8acfdf2479f2" containerName="route-controller-manager" containerID="cri-o://cb02c744cadf5aca9a2d93dc156a2001de075d82f5b9609557610d270c021b3c" gracePeriod=30 Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.795640 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45" Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.859320 4698 generic.go:334] "Generic (PLEG): container finished" podID="74a0358a-8c91-42a5-9763-83f17e4fd05d" containerID="77428bdc910e384129280ce7b040437109b263cf2f74ce66a52cd01f387940c1" exitCode=0 Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.859403 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45" Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.859412 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45" event={"ID":"74a0358a-8c91-42a5-9763-83f17e4fd05d","Type":"ContainerDied","Data":"77428bdc910e384129280ce7b040437109b263cf2f74ce66a52cd01f387940c1"} Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.859506 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f66f7ccb-wxg45" event={"ID":"74a0358a-8c91-42a5-9763-83f17e4fd05d","Type":"ContainerDied","Data":"a9e127b30e87800980444509b875027fa4a8187f57e67379a67397a80f8b9941"} Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.859533 4698 scope.go:117] "RemoveContainer" containerID="77428bdc910e384129280ce7b040437109b263cf2f74ce66a52cd01f387940c1" Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.861645 4698 generic.go:334] "Generic (PLEG): container finished" podID="ae4b48ff-e232-4503-94ef-8acfdf2479f2" containerID="cb02c744cadf5aca9a2d93dc156a2001de075d82f5b9609557610d270c021b3c" exitCode=0 Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.861690 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9" event={"ID":"ae4b48ff-e232-4503-94ef-8acfdf2479f2","Type":"ContainerDied","Data":"cb02c744cadf5aca9a2d93dc156a2001de075d82f5b9609557610d270c021b3c"} Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.861732 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9" event={"ID":"ae4b48ff-e232-4503-94ef-8acfdf2479f2","Type":"ContainerDied","Data":"930db59cbb3e6bcd8a0923b97786a14f494d95a51a2eea0db1fc424f07684088"} Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.861745 4698 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="930db59cbb3e6bcd8a0923b97786a14f494d95a51a2eea0db1fc424f07684088" Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.867553 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9" Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.881467 4698 scope.go:117] "RemoveContainer" containerID="77428bdc910e384129280ce7b040437109b263cf2f74ce66a52cd01f387940c1" Feb 16 00:11:22 crc kubenswrapper[4698]: E0216 00:11:22.881972 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77428bdc910e384129280ce7b040437109b263cf2f74ce66a52cd01f387940c1\": container with ID starting with 77428bdc910e384129280ce7b040437109b263cf2f74ce66a52cd01f387940c1 not found: ID does not exist" containerID="77428bdc910e384129280ce7b040437109b263cf2f74ce66a52cd01f387940c1" Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.882012 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77428bdc910e384129280ce7b040437109b263cf2f74ce66a52cd01f387940c1"} err="failed to get container status \"77428bdc910e384129280ce7b040437109b263cf2f74ce66a52cd01f387940c1\": rpc error: code = NotFound desc = could not find container \"77428bdc910e384129280ce7b040437109b263cf2f74ce66a52cd01f387940c1\": container with ID starting with 77428bdc910e384129280ce7b040437109b263cf2f74ce66a52cd01f387940c1 not found: ID does not exist" Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.994542 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-config\") pod \"74a0358a-8c91-42a5-9763-83f17e4fd05d\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 
00:11:22.994605 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdkps\" (UniqueName: \"kubernetes.io/projected/ae4b48ff-e232-4503-94ef-8acfdf2479f2-kube-api-access-xdkps\") pod \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.994686 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-client-ca\") pod \"74a0358a-8c91-42a5-9763-83f17e4fd05d\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.994736 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbvjh\" (UniqueName: \"kubernetes.io/projected/74a0358a-8c91-42a5-9763-83f17e4fd05d-kube-api-access-jbvjh\") pod \"74a0358a-8c91-42a5-9763-83f17e4fd05d\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.994763 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-proxy-ca-bundles\") pod \"74a0358a-8c91-42a5-9763-83f17e4fd05d\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.994798 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae4b48ff-e232-4503-94ef-8acfdf2479f2-config\") pod \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.994849 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae4b48ff-e232-4503-94ef-8acfdf2479f2-serving-cert\") pod 
\"ae4b48ff-e232-4503-94ef-8acfdf2479f2\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.994868 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a0358a-8c91-42a5-9763-83f17e4fd05d-serving-cert\") pod \"74a0358a-8c91-42a5-9763-83f17e4fd05d\" (UID: \"74a0358a-8c91-42a5-9763-83f17e4fd05d\") " Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.994904 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae4b48ff-e232-4503-94ef-8acfdf2479f2-client-ca\") pod \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\" (UID: \"ae4b48ff-e232-4503-94ef-8acfdf2479f2\") " Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.995909 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "74a0358a-8c91-42a5-9763-83f17e4fd05d" (UID: "74a0358a-8c91-42a5-9763-83f17e4fd05d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.995942 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae4b48ff-e232-4503-94ef-8acfdf2479f2-client-ca" (OuterVolumeSpecName: "client-ca") pod "ae4b48ff-e232-4503-94ef-8acfdf2479f2" (UID: "ae4b48ff-e232-4503-94ef-8acfdf2479f2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.996312 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-config" (OuterVolumeSpecName: "config") pod "74a0358a-8c91-42a5-9763-83f17e4fd05d" (UID: "74a0358a-8c91-42a5-9763-83f17e4fd05d"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.996324 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-client-ca" (OuterVolumeSpecName: "client-ca") pod "74a0358a-8c91-42a5-9763-83f17e4fd05d" (UID: "74a0358a-8c91-42a5-9763-83f17e4fd05d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:11:22 crc kubenswrapper[4698]: I0216 00:11:22.996757 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae4b48ff-e232-4503-94ef-8acfdf2479f2-config" (OuterVolumeSpecName: "config") pod "ae4b48ff-e232-4503-94ef-8acfdf2479f2" (UID: "ae4b48ff-e232-4503-94ef-8acfdf2479f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.002348 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a0358a-8c91-42a5-9763-83f17e4fd05d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "74a0358a-8c91-42a5-9763-83f17e4fd05d" (UID: "74a0358a-8c91-42a5-9763-83f17e4fd05d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.002786 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4b48ff-e232-4503-94ef-8acfdf2479f2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ae4b48ff-e232-4503-94ef-8acfdf2479f2" (UID: "ae4b48ff-e232-4503-94ef-8acfdf2479f2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.002808 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a0358a-8c91-42a5-9763-83f17e4fd05d-kube-api-access-jbvjh" (OuterVolumeSpecName: "kube-api-access-jbvjh") pod "74a0358a-8c91-42a5-9763-83f17e4fd05d" (UID: "74a0358a-8c91-42a5-9763-83f17e4fd05d"). InnerVolumeSpecName "kube-api-access-jbvjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.002791 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4b48ff-e232-4503-94ef-8acfdf2479f2-kube-api-access-xdkps" (OuterVolumeSpecName: "kube-api-access-xdkps") pod "ae4b48ff-e232-4503-94ef-8acfdf2479f2" (UID: "ae4b48ff-e232-4503-94ef-8acfdf2479f2"). InnerVolumeSpecName "kube-api-access-xdkps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.096683 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a0358a-8c91-42a5-9763-83f17e4fd05d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.096732 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae4b48ff-e232-4503-94ef-8acfdf2479f2-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.096745 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.096761 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdkps\" (UniqueName: \"kubernetes.io/projected/ae4b48ff-e232-4503-94ef-8acfdf2479f2-kube-api-access-xdkps\") 
on node \"crc\" DevicePath \"\"" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.096778 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.096788 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbvjh\" (UniqueName: \"kubernetes.io/projected/74a0358a-8c91-42a5-9763-83f17e4fd05d-kube-api-access-jbvjh\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.096798 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74a0358a-8c91-42a5-9763-83f17e4fd05d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.096809 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae4b48ff-e232-4503-94ef-8acfdf2479f2-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.096818 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae4b48ff-e232-4503-94ef-8acfdf2479f2-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.210810 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"] Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.215507 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-55f66f7ccb-wxg45"] Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.250451 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a0358a-8c91-42a5-9763-83f17e4fd05d" path="/var/lib/kubelet/pods/74a0358a-8c91-42a5-9763-83f17e4fd05d/volumes" Feb 16 
00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.442810 4698 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.443188 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://d9aecad2fbf2cd40ec0007a04c48fc92ef9b913d9e30847857d1646f06b4de28" gracePeriod=5 Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.631127 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b4cf55777-gglzg"] Feb 16 00:11:23 crc kubenswrapper[4698]: E0216 00:11:23.631412 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a0358a-8c91-42a5-9763-83f17e4fd05d" containerName="controller-manager" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.631427 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a0358a-8c91-42a5-9763-83f17e4fd05d" containerName="controller-manager" Feb 16 00:11:23 crc kubenswrapper[4698]: E0216 00:11:23.631442 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae4b48ff-e232-4503-94ef-8acfdf2479f2" containerName="route-controller-manager" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.631448 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae4b48ff-e232-4503-94ef-8acfdf2479f2" containerName="route-controller-manager" Feb 16 00:11:23 crc kubenswrapper[4698]: E0216 00:11:23.631457 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.631464 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 
00:11:23.631571 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae4b48ff-e232-4503-94ef-8acfdf2479f2" containerName="route-controller-manager" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.631581 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a0358a-8c91-42a5-9763-83f17e4fd05d" containerName="controller-manager" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.631593 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.632133 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.634259 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.635526 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.636407 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.636594 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.636588 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.637368 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.638345 4698 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"] Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.639457 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.646045 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.650873 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"] Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.654388 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b4cf55777-gglzg"] Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.707595 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx2h4\" (UniqueName: \"kubernetes.io/projected/4842db21-c34f-472e-b045-f314e0327289-kube-api-access-hx2h4\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.707757 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j92l9\" (UniqueName: \"kubernetes.io/projected/d79b9bb4-f152-4adb-8457-759504526a41-kube-api-access-j92l9\") pod \"route-controller-manager-bd96f5fff-7cdgh\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.707845 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-config\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.707887 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79b9bb4-f152-4adb-8457-759504526a41-config\") pod \"route-controller-manager-bd96f5fff-7cdgh\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.707909 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4842db21-c34f-472e-b045-f314e0327289-serving-cert\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.707960 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-client-ca\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg" Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.708053 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d79b9bb4-f152-4adb-8457-759504526a41-client-ca\") pod \"route-controller-manager-bd96f5fff-7cdgh\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") " 
pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.708082 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d79b9bb4-f152-4adb-8457-759504526a41-serving-cert\") pod \"route-controller-manager-bd96f5fff-7cdgh\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.708111 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-proxy-ca-bundles\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.809417 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79b9bb4-f152-4adb-8457-759504526a41-config\") pod \"route-controller-manager-bd96f5fff-7cdgh\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.809983 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4842db21-c34f-472e-b045-f314e0327289-serving-cert\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.810869 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-client-ca\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.810931 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d79b9bb4-f152-4adb-8457-759504526a41-client-ca\") pod \"route-controller-manager-bd96f5fff-7cdgh\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.810962 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d79b9bb4-f152-4adb-8457-759504526a41-serving-cert\") pod \"route-controller-manager-bd96f5fff-7cdgh\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.810991 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-proxy-ca-bundles\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.811037 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx2h4\" (UniqueName: \"kubernetes.io/projected/4842db21-c34f-472e-b045-f314e0327289-kube-api-access-hx2h4\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.811076 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j92l9\" (UniqueName: \"kubernetes.io/projected/d79b9bb4-f152-4adb-8457-759504526a41-kube-api-access-j92l9\") pod \"route-controller-manager-bd96f5fff-7cdgh\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.811114 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-config\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.811437 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79b9bb4-f152-4adb-8457-759504526a41-config\") pod \"route-controller-manager-bd96f5fff-7cdgh\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.812492 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d79b9bb4-f152-4adb-8457-759504526a41-client-ca\") pod \"route-controller-manager-bd96f5fff-7cdgh\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.812895 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-client-ca\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.812956 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-config\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.813971 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-proxy-ca-bundles\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.817265 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d79b9bb4-f152-4adb-8457-759504526a41-serving-cert\") pod \"route-controller-manager-bd96f5fff-7cdgh\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.818199 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4842db21-c34f-472e-b045-f314e0327289-serving-cert\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.845789 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j92l9\" (UniqueName: \"kubernetes.io/projected/d79b9bb4-f152-4adb-8457-759504526a41-kube-api-access-j92l9\") pod \"route-controller-manager-bd96f5fff-7cdgh\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.847052 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx2h4\" (UniqueName: \"kubernetes.io/projected/4842db21-c34f-472e-b045-f314e0327289-kube-api-access-hx2h4\") pod \"controller-manager-6b4cf55777-gglzg\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.871955 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.896267 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"]
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.901449 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd6b584d4-s6hj9"]
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.957975 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:23 crc kubenswrapper[4698]: I0216 00:11:23.973908 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:24 crc kubenswrapper[4698]: I0216 00:11:24.236920 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"]
Feb 16 00:11:24 crc kubenswrapper[4698]: I0216 00:11:24.390183 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b4cf55777-gglzg"]
Feb 16 00:11:24 crc kubenswrapper[4698]: I0216 00:11:24.879570 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh" event={"ID":"d79b9bb4-f152-4adb-8457-759504526a41","Type":"ContainerStarted","Data":"245d1f855b84317f0d00be4bc445ae8bde386f767b7071a88d6af992b0b06591"}
Feb 16 00:11:24 crc kubenswrapper[4698]: I0216 00:11:24.879681 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh" event={"ID":"d79b9bb4-f152-4adb-8457-759504526a41","Type":"ContainerStarted","Data":"0e3bbd7052a8ae08571f93bce98810695b9a57e16475cabf060e44b3f19bc8bf"}
Feb 16 00:11:24 crc kubenswrapper[4698]: I0216 00:11:24.879917 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:24 crc kubenswrapper[4698]: I0216 00:11:24.884173 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg" event={"ID":"4842db21-c34f-472e-b045-f314e0327289","Type":"ContainerStarted","Data":"5c55fe2917f3d7dc7d3641016d387d6b292df5f75b9bdfd04c3bead4d2baf060"}
Feb 16 00:11:24 crc kubenswrapper[4698]: I0216 00:11:24.884206 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg" event={"ID":"4842db21-c34f-472e-b045-f314e0327289","Type":"ContainerStarted","Data":"ebd0c9911413ee172e05028954d457c18bac5234348e0c72488105c131e78822"}
Feb 16 00:11:24 crc kubenswrapper[4698]: I0216 00:11:24.884430 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:24 crc kubenswrapper[4698]: I0216 00:11:24.889846 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:24 crc kubenswrapper[4698]: I0216 00:11:24.893078 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:24 crc kubenswrapper[4698]: I0216 00:11:24.906603 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh" podStartSLOduration=2.906581022 podStartE2EDuration="2.906581022s" podCreationTimestamp="2026-02-16 00:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:24.906005164 +0000 UTC m=+294.563903926" watchObservedRunningTime="2026-02-16 00:11:24.906581022 +0000 UTC m=+294.564479804"
Feb 16 00:11:24 crc kubenswrapper[4698]: I0216 00:11:24.948573 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg" podStartSLOduration=2.948542594 podStartE2EDuration="2.948542594s" podCreationTimestamp="2026-02-16 00:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:24.942864211 +0000 UTC m=+294.600762973" watchObservedRunningTime="2026-02-16 00:11:24.948542594 +0000 UTC m=+294.606441356"
Feb 16 00:11:25 crc kubenswrapper[4698]: I0216 00:11:25.240080 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae4b48ff-e232-4503-94ef-8acfdf2479f2" path="/var/lib/kubelet/pods/ae4b48ff-e232-4503-94ef-8acfdf2479f2/volumes"
Feb 16 00:11:28 crc kubenswrapper[4698]: I0216 00:11:28.917776 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 16 00:11:28 crc kubenswrapper[4698]: I0216 00:11:28.918083 4698 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="d9aecad2fbf2cd40ec0007a04c48fc92ef9b913d9e30847857d1646f06b4de28" exitCode=137
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.050350 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.050474 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.192104 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.192195 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.192248 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.192267 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.192305 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.192332 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.192451 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.192411 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.192550 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.192936 4698 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.192965 4698 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.192979 4698 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.192993 4698 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.209993 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.247367 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.294718 4698 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.926586 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.926732 4698 scope.go:117] "RemoveContainer" containerID="d9aecad2fbf2cd40ec0007a04c48fc92ef9b913d9e30847857d1646f06b4de28"
Feb 16 00:11:29 crc kubenswrapper[4698]: I0216 00:11:29.926793 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:11:30 crc kubenswrapper[4698]: I0216 00:11:30.966196 4698 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 16 00:11:32 crc kubenswrapper[4698]: I0216 00:11:32.957210 4698 generic.go:334] "Generic (PLEG): container finished" podID="9679204c-d5b6-489d-9f27-d84d360284ae" containerID="c720b800721bdd8f1712d4e0d15bc8527d36ea11c701fe5150fd9f4006c94cbe" exitCode=0
Feb 16 00:11:32 crc kubenswrapper[4698]: I0216 00:11:32.957359 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" event={"ID":"9679204c-d5b6-489d-9f27-d84d360284ae","Type":"ContainerDied","Data":"c720b800721bdd8f1712d4e0d15bc8527d36ea11c701fe5150fd9f4006c94cbe"}
Feb 16 00:11:32 crc kubenswrapper[4698]: I0216 00:11:32.958494 4698 scope.go:117] "RemoveContainer" containerID="c720b800721bdd8f1712d4e0d15bc8527d36ea11c701fe5150fd9f4006c94cbe"
Feb 16 00:11:33 crc kubenswrapper[4698]: I0216 00:11:33.969312 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" event={"ID":"9679204c-d5b6-489d-9f27-d84d360284ae","Type":"ContainerStarted","Data":"611486218e32af6680ed319c28df78ee2293b09dbc07b8e25f3c4a765dbb37ce"}
Feb 16 00:11:33 crc kubenswrapper[4698]: I0216 00:11:33.970470 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct"
Feb 16 00:11:33 crc kubenswrapper[4698]: I0216 00:11:33.974281 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct"
Feb 16 00:11:42 crc kubenswrapper[4698]: I0216 00:11:42.359090 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b4cf55777-gglzg"]
Feb 16 00:11:42 crc kubenswrapper[4698]: I0216 00:11:42.360172 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg" podUID="4842db21-c34f-472e-b045-f314e0327289" containerName="controller-manager" containerID="cri-o://5c55fe2917f3d7dc7d3641016d387d6b292df5f75b9bdfd04c3bead4d2baf060" gracePeriod=30
Feb 16 00:11:42 crc kubenswrapper[4698]: I0216 00:11:42.390157 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"]
Feb 16 00:11:42 crc kubenswrapper[4698]: I0216 00:11:42.390358 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh" podUID="d79b9bb4-f152-4adb-8457-759504526a41" containerName="route-controller-manager" containerID="cri-o://245d1f855b84317f0d00be4bc445ae8bde386f767b7071a88d6af992b0b06591" gracePeriod=30
Feb 16 00:11:42 crc kubenswrapper[4698]: I0216 00:11:42.949048 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:42 crc kubenswrapper[4698]: I0216 00:11:42.954349 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.025942 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d79b9bb4-f152-4adb-8457-759504526a41-client-ca\") pod \"d79b9bb4-f152-4adb-8457-759504526a41\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") "
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.025994 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-config\") pod \"4842db21-c34f-472e-b045-f314e0327289\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") "
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.026023 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-proxy-ca-bundles\") pod \"4842db21-c34f-472e-b045-f314e0327289\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") "
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.026048 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d79b9bb4-f152-4adb-8457-759504526a41-serving-cert\") pod \"d79b9bb4-f152-4adb-8457-759504526a41\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") "
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.026078 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4842db21-c34f-472e-b045-f314e0327289-serving-cert\") pod \"4842db21-c34f-472e-b045-f314e0327289\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") "
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.026105 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-client-ca\") pod \"4842db21-c34f-472e-b045-f314e0327289\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") "
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.026131 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j92l9\" (UniqueName: \"kubernetes.io/projected/d79b9bb4-f152-4adb-8457-759504526a41-kube-api-access-j92l9\") pod \"d79b9bb4-f152-4adb-8457-759504526a41\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") "
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.026150 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79b9bb4-f152-4adb-8457-759504526a41-config\") pod \"d79b9bb4-f152-4adb-8457-759504526a41\" (UID: \"d79b9bb4-f152-4adb-8457-759504526a41\") "
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.026180 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx2h4\" (UniqueName: \"kubernetes.io/projected/4842db21-c34f-472e-b045-f314e0327289-kube-api-access-hx2h4\") pod \"4842db21-c34f-472e-b045-f314e0327289\" (UID: \"4842db21-c34f-472e-b045-f314e0327289\") "
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.026988 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4842db21-c34f-472e-b045-f314e0327289" (UID: "4842db21-c34f-472e-b045-f314e0327289"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.027019 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d79b9bb4-f152-4adb-8457-759504526a41-client-ca" (OuterVolumeSpecName: "client-ca") pod "d79b9bb4-f152-4adb-8457-759504526a41" (UID: "d79b9bb4-f152-4adb-8457-759504526a41"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.027254 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-client-ca" (OuterVolumeSpecName: "client-ca") pod "4842db21-c34f-472e-b045-f314e0327289" (UID: "4842db21-c34f-472e-b045-f314e0327289"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.027667 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d79b9bb4-f152-4adb-8457-759504526a41-config" (OuterVolumeSpecName: "config") pod "d79b9bb4-f152-4adb-8457-759504526a41" (UID: "d79b9bb4-f152-4adb-8457-759504526a41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.030685 4698 generic.go:334] "Generic (PLEG): container finished" podID="d79b9bb4-f152-4adb-8457-759504526a41" containerID="245d1f855b84317f0d00be4bc445ae8bde386f767b7071a88d6af992b0b06591" exitCode=0
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.030815 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh" event={"ID":"d79b9bb4-f152-4adb-8457-759504526a41","Type":"ContainerDied","Data":"245d1f855b84317f0d00be4bc445ae8bde386f767b7071a88d6af992b0b06591"}
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.030914 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh" event={"ID":"d79b9bb4-f152-4adb-8457-759504526a41","Type":"ContainerDied","Data":"0e3bbd7052a8ae08571f93bce98810695b9a57e16475cabf060e44b3f19bc8bf"}
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.030992 4698 scope.go:117] "RemoveContainer" containerID="245d1f855b84317f0d00be4bc445ae8bde386f767b7071a88d6af992b0b06591"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.031162 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.033187 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-config" (OuterVolumeSpecName: "config") pod "4842db21-c34f-472e-b045-f314e0327289" (UID: "4842db21-c34f-472e-b045-f314e0327289"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.034552 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4842db21-c34f-472e-b045-f314e0327289-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4842db21-c34f-472e-b045-f314e0327289" (UID: "4842db21-c34f-472e-b045-f314e0327289"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.034986 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d79b9bb4-f152-4adb-8457-759504526a41-kube-api-access-j92l9" (OuterVolumeSpecName: "kube-api-access-j92l9") pod "d79b9bb4-f152-4adb-8457-759504526a41" (UID: "d79b9bb4-f152-4adb-8457-759504526a41"). InnerVolumeSpecName "kube-api-access-j92l9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.035243 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4842db21-c34f-472e-b045-f314e0327289-kube-api-access-hx2h4" (OuterVolumeSpecName: "kube-api-access-hx2h4") pod "4842db21-c34f-472e-b045-f314e0327289" (UID: "4842db21-c34f-472e-b045-f314e0327289"). InnerVolumeSpecName "kube-api-access-hx2h4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.036978 4698 generic.go:334] "Generic (PLEG): container finished" podID="4842db21-c34f-472e-b045-f314e0327289" containerID="5c55fe2917f3d7dc7d3641016d387d6b292df5f75b9bdfd04c3bead4d2baf060" exitCode=0
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.037036 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg" event={"ID":"4842db21-c34f-472e-b045-f314e0327289","Type":"ContainerDied","Data":"5c55fe2917f3d7dc7d3641016d387d6b292df5f75b9bdfd04c3bead4d2baf060"}
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.037079 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg" event={"ID":"4842db21-c34f-472e-b045-f314e0327289","Type":"ContainerDied","Data":"ebd0c9911413ee172e05028954d457c18bac5234348e0c72488105c131e78822"}
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.037189 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d79b9bb4-f152-4adb-8457-759504526a41-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d79b9bb4-f152-4adb-8457-759504526a41" (UID: "d79b9bb4-f152-4adb-8457-759504526a41"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.037545 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4cf55777-gglzg"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.092072 4698 scope.go:117] "RemoveContainer" containerID="245d1f855b84317f0d00be4bc445ae8bde386f767b7071a88d6af992b0b06591"
Feb 16 00:11:43 crc kubenswrapper[4698]: E0216 00:11:43.092898 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"245d1f855b84317f0d00be4bc445ae8bde386f767b7071a88d6af992b0b06591\": container with ID starting with 245d1f855b84317f0d00be4bc445ae8bde386f767b7071a88d6af992b0b06591 not found: ID does not exist" containerID="245d1f855b84317f0d00be4bc445ae8bde386f767b7071a88d6af992b0b06591"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.092978 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"245d1f855b84317f0d00be4bc445ae8bde386f767b7071a88d6af992b0b06591"} err="failed to get container status \"245d1f855b84317f0d00be4bc445ae8bde386f767b7071a88d6af992b0b06591\": rpc error: code = NotFound desc = could not find container \"245d1f855b84317f0d00be4bc445ae8bde386f767b7071a88d6af992b0b06591\": container with ID starting with 245d1f855b84317f0d00be4bc445ae8bde386f767b7071a88d6af992b0b06591 not found: ID does not exist"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.093027 4698 scope.go:117] "RemoveContainer" containerID="5c55fe2917f3d7dc7d3641016d387d6b292df5f75b9bdfd04c3bead4d2baf060"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.105823 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b4cf55777-gglzg"]
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.111799 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b4cf55777-gglzg"]
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.125418 4698 scope.go:117] "RemoveContainer" containerID="5c55fe2917f3d7dc7d3641016d387d6b292df5f75b9bdfd04c3bead4d2baf060"
Feb 16 00:11:43 crc kubenswrapper[4698]: E0216 00:11:43.126890 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c55fe2917f3d7dc7d3641016d387d6b292df5f75b9bdfd04c3bead4d2baf060\": container with ID starting with 5c55fe2917f3d7dc7d3641016d387d6b292df5f75b9bdfd04c3bead4d2baf060 not found: ID does not exist" containerID="5c55fe2917f3d7dc7d3641016d387d6b292df5f75b9bdfd04c3bead4d2baf060"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.127244 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d79b9bb4-f152-4adb-8457-759504526a41-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.127272 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-config\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.127282 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.127293 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d79b9bb4-f152-4adb-8457-759504526a41-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.127304 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4842db21-c34f-472e-b045-f314e0327289-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.127313 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4842db21-c34f-472e-b045-f314e0327289-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.127323 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j92l9\" (UniqueName: \"kubernetes.io/projected/d79b9bb4-f152-4adb-8457-759504526a41-kube-api-access-j92l9\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.127337 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79b9bb4-f152-4adb-8457-759504526a41-config\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.127349 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx2h4\" (UniqueName: \"kubernetes.io/projected/4842db21-c34f-472e-b045-f314e0327289-kube-api-access-hx2h4\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.127799 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c55fe2917f3d7dc7d3641016d387d6b292df5f75b9bdfd04c3bead4d2baf060"} err="failed to get container status \"5c55fe2917f3d7dc7d3641016d387d6b292df5f75b9bdfd04c3bead4d2baf060\": rpc error: code = NotFound desc = could not find container \"5c55fe2917f3d7dc7d3641016d387d6b292df5f75b9bdfd04c3bead4d2baf060\": container with ID starting with 5c55fe2917f3d7dc7d3641016d387d6b292df5f75b9bdfd04c3bead4d2baf060 not found: ID does not exist"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.243876 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4842db21-c34f-472e-b045-f314e0327289" path="/var/lib/kubelet/pods/4842db21-c34f-472e-b045-f314e0327289/volumes"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.356261 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"]
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.362799 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd96f5fff-7cdgh"]
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.640982 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6dd96d466b-bq5pj"]
Feb 16 00:11:43 crc kubenswrapper[4698]: E0216 00:11:43.641318 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79b9bb4-f152-4adb-8457-759504526a41" containerName="route-controller-manager"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.641334 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79b9bb4-f152-4adb-8457-759504526a41" containerName="route-controller-manager"
Feb 16 00:11:43 crc kubenswrapper[4698]: E0216 00:11:43.641349 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4842db21-c34f-472e-b045-f314e0327289" containerName="controller-manager"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.641358 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4842db21-c34f-472e-b045-f314e0327289" containerName="controller-manager"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.641480 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4842db21-c34f-472e-b045-f314e0327289" containerName="controller-manager"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.641502 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d79b9bb4-f152-4adb-8457-759504526a41" containerName="route-controller-manager"
Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.641968 4698 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.644722 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.644908 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.645147 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.645727 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.645954 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.649205 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn"] Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.650017 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.651437 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.651756 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.652394 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.652474 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.652696 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.652904 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.652841 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.653570 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.658044 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dd96d466b-bq5pj"] Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.679498 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn"] Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.734367 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82bd7752-d5ed-4f1f-a955-0cf569099e8c-serving-cert\") pod \"route-controller-manager-6bb87b5856-2d8mn\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.734431 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-config\") pod \"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.734454 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bd7752-d5ed-4f1f-a955-0cf569099e8c-config\") pod \"route-controller-manager-6bb87b5856-2d8mn\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.734490 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxtjr\" (UniqueName: \"kubernetes.io/projected/5161e472-d94d-43d9-89e5-b23967396030-kube-api-access-pxtjr\") pod \"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.734510 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-proxy-ca-bundles\") pod \"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.734540 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5161e472-d94d-43d9-89e5-b23967396030-serving-cert\") pod \"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.734557 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-client-ca\") pod \"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.734584 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82bd7752-d5ed-4f1f-a955-0cf569099e8c-client-ca\") pod \"route-controller-manager-6bb87b5856-2d8mn\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.734609 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvjf2\" (UniqueName: \"kubernetes.io/projected/82bd7752-d5ed-4f1f-a955-0cf569099e8c-kube-api-access-dvjf2\") pod 
\"route-controller-manager-6bb87b5856-2d8mn\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.836365 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82bd7752-d5ed-4f1f-a955-0cf569099e8c-serving-cert\") pod \"route-controller-manager-6bb87b5856-2d8mn\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.836424 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-config\") pod \"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.836445 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bd7752-d5ed-4f1f-a955-0cf569099e8c-config\") pod \"route-controller-manager-6bb87b5856-2d8mn\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.836477 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxtjr\" (UniqueName: \"kubernetes.io/projected/5161e472-d94d-43d9-89e5-b23967396030-kube-api-access-pxtjr\") pod \"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.836496 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-proxy-ca-bundles\") pod \"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.836523 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5161e472-d94d-43d9-89e5-b23967396030-serving-cert\") pod \"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.836537 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-client-ca\") pod \"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.836564 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82bd7752-d5ed-4f1f-a955-0cf569099e8c-client-ca\") pod \"route-controller-manager-6bb87b5856-2d8mn\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.836584 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvjf2\" (UniqueName: \"kubernetes.io/projected/82bd7752-d5ed-4f1f-a955-0cf569099e8c-kube-api-access-dvjf2\") pod \"route-controller-manager-6bb87b5856-2d8mn\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.837836 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-client-ca\") pod \"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.837942 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bd7752-d5ed-4f1f-a955-0cf569099e8c-config\") pod \"route-controller-manager-6bb87b5856-2d8mn\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.838256 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82bd7752-d5ed-4f1f-a955-0cf569099e8c-client-ca\") pod \"route-controller-manager-6bb87b5856-2d8mn\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.838662 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-config\") pod \"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.838816 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-proxy-ca-bundles\") pod 
\"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.846940 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5161e472-d94d-43d9-89e5-b23967396030-serving-cert\") pod \"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.848212 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82bd7752-d5ed-4f1f-a955-0cf569099e8c-serving-cert\") pod \"route-controller-manager-6bb87b5856-2d8mn\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.855463 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvjf2\" (UniqueName: \"kubernetes.io/projected/82bd7752-d5ed-4f1f-a955-0cf569099e8c-kube-api-access-dvjf2\") pod \"route-controller-manager-6bb87b5856-2d8mn\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.863995 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxtjr\" (UniqueName: \"kubernetes.io/projected/5161e472-d94d-43d9-89e5-b23967396030-kube-api-access-pxtjr\") pod \"controller-manager-6dd96d466b-bq5pj\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.975689 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:43 crc kubenswrapper[4698]: I0216 00:11:43.985660 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:44 crc kubenswrapper[4698]: I0216 00:11:44.438686 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dd96d466b-bq5pj"] Feb 16 00:11:44 crc kubenswrapper[4698]: W0216 00:11:44.447775 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5161e472_d94d_43d9_89e5_b23967396030.slice/crio-6adea1f2de4b778f7d6c80c1c4e9e8186a60ea7f7edf022676afb1b43ad99c5a WatchSource:0}: Error finding container 6adea1f2de4b778f7d6c80c1c4e9e8186a60ea7f7edf022676afb1b43ad99c5a: Status 404 returned error can't find the container with id 6adea1f2de4b778f7d6c80c1c4e9e8186a60ea7f7edf022676afb1b43ad99c5a Feb 16 00:11:44 crc kubenswrapper[4698]: I0216 00:11:44.507797 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn"] Feb 16 00:11:44 crc kubenswrapper[4698]: W0216 00:11:44.513474 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82bd7752_d5ed_4f1f_a955_0cf569099e8c.slice/crio-05fc4148df2e714c700eac67640496280c71f6d9ffbfb197d3b8d531f9906645 WatchSource:0}: Error finding container 05fc4148df2e714c700eac67640496280c71f6d9ffbfb197d3b8d531f9906645: Status 404 returned error can't find the container with id 05fc4148df2e714c700eac67640496280c71f6d9ffbfb197d3b8d531f9906645 Feb 16 00:11:45 crc kubenswrapper[4698]: I0216 00:11:45.069054 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" 
event={"ID":"82bd7752-d5ed-4f1f-a955-0cf569099e8c","Type":"ContainerStarted","Data":"a8200cd20c8d83d8e958886ef1b623dac8cb3f9e1e3d588b0703709c5c32a036"} Feb 16 00:11:45 crc kubenswrapper[4698]: I0216 00:11:45.069501 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:45 crc kubenswrapper[4698]: I0216 00:11:45.069514 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" event={"ID":"82bd7752-d5ed-4f1f-a955-0cf569099e8c","Type":"ContainerStarted","Data":"05fc4148df2e714c700eac67640496280c71f6d9ffbfb197d3b8d531f9906645"} Feb 16 00:11:45 crc kubenswrapper[4698]: I0216 00:11:45.071052 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" event={"ID":"5161e472-d94d-43d9-89e5-b23967396030","Type":"ContainerStarted","Data":"a71b2cf9f4ea3b49c543bd3c239a47077482b804455da9acb273a99b7239119e"} Feb 16 00:11:45 crc kubenswrapper[4698]: I0216 00:11:45.071105 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" event={"ID":"5161e472-d94d-43d9-89e5-b23967396030","Type":"ContainerStarted","Data":"6adea1f2de4b778f7d6c80c1c4e9e8186a60ea7f7edf022676afb1b43ad99c5a"} Feb 16 00:11:45 crc kubenswrapper[4698]: I0216 00:11:45.095501 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" podStartSLOduration=3.095482551 podStartE2EDuration="3.095482551s" podCreationTimestamp="2026-02-16 00:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:45.09544234 +0000 UTC m=+314.753341102" watchObservedRunningTime="2026-02-16 00:11:45.095482551 +0000 UTC 
m=+314.753381313" Feb 16 00:11:45 crc kubenswrapper[4698]: I0216 00:11:45.122483 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" podStartSLOduration=3.122466596 podStartE2EDuration="3.122466596s" podCreationTimestamp="2026-02-16 00:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:45.119180845 +0000 UTC m=+314.777079617" watchObservedRunningTime="2026-02-16 00:11:45.122466596 +0000 UTC m=+314.780365358" Feb 16 00:11:45 crc kubenswrapper[4698]: I0216 00:11:45.239360 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d79b9bb4-f152-4adb-8457-759504526a41" path="/var/lib/kubelet/pods/d79b9bb4-f152-4adb-8457-759504526a41/volumes" Feb 16 00:11:45 crc kubenswrapper[4698]: I0216 00:11:45.459175 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:11:46 crc kubenswrapper[4698]: I0216 00:11:46.076694 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:11:46 crc kubenswrapper[4698]: I0216 00:11:46.082506 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:12:22 crc kubenswrapper[4698]: I0216 00:12:22.375539 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn"] Feb 16 00:12:22 crc kubenswrapper[4698]: I0216 00:12:22.376640 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" podUID="82bd7752-d5ed-4f1f-a955-0cf569099e8c" 
containerName="route-controller-manager" containerID="cri-o://a8200cd20c8d83d8e958886ef1b623dac8cb3f9e1e3d588b0703709c5c32a036" gracePeriod=30 Feb 16 00:12:22 crc kubenswrapper[4698]: I0216 00:12:22.696543 4698 generic.go:334] "Generic (PLEG): container finished" podID="82bd7752-d5ed-4f1f-a955-0cf569099e8c" containerID="a8200cd20c8d83d8e958886ef1b623dac8cb3f9e1e3d588b0703709c5c32a036" exitCode=0 Feb 16 00:12:22 crc kubenswrapper[4698]: I0216 00:12:22.696588 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" event={"ID":"82bd7752-d5ed-4f1f-a955-0cf569099e8c","Type":"ContainerDied","Data":"a8200cd20c8d83d8e958886ef1b623dac8cb3f9e1e3d588b0703709c5c32a036"} Feb 16 00:12:22 crc kubenswrapper[4698]: I0216 00:12:22.852415 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:12:22 crc kubenswrapper[4698]: I0216 00:12:22.993800 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvjf2\" (UniqueName: \"kubernetes.io/projected/82bd7752-d5ed-4f1f-a955-0cf569099e8c-kube-api-access-dvjf2\") pod \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " Feb 16 00:12:22 crc kubenswrapper[4698]: I0216 00:12:22.993903 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bd7752-d5ed-4f1f-a955-0cf569099e8c-config\") pod \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " Feb 16 00:12:22 crc kubenswrapper[4698]: I0216 00:12:22.994014 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82bd7752-d5ed-4f1f-a955-0cf569099e8c-client-ca\") pod \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\" (UID: 
\"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " Feb 16 00:12:22 crc kubenswrapper[4698]: I0216 00:12:22.995148 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bd7752-d5ed-4f1f-a955-0cf569099e8c-client-ca" (OuterVolumeSpecName: "client-ca") pod "82bd7752-d5ed-4f1f-a955-0cf569099e8c" (UID: "82bd7752-d5ed-4f1f-a955-0cf569099e8c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:12:22 crc kubenswrapper[4698]: I0216 00:12:22.995226 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bd7752-d5ed-4f1f-a955-0cf569099e8c-config" (OuterVolumeSpecName: "config") pod "82bd7752-d5ed-4f1f-a955-0cf569099e8c" (UID: "82bd7752-d5ed-4f1f-a955-0cf569099e8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:12:22 crc kubenswrapper[4698]: I0216 00:12:22.995513 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82bd7752-d5ed-4f1f-a955-0cf569099e8c-serving-cert\") pod \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\" (UID: \"82bd7752-d5ed-4f1f-a955-0cf569099e8c\") " Feb 16 00:12:22 crc kubenswrapper[4698]: I0216 00:12:22.996106 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bd7752-d5ed-4f1f-a955-0cf569099e8c-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:22 crc kubenswrapper[4698]: I0216 00:12:22.996130 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82bd7752-d5ed-4f1f-a955-0cf569099e8c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.002211 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82bd7752-d5ed-4f1f-a955-0cf569099e8c-kube-api-access-dvjf2" (OuterVolumeSpecName: 
"kube-api-access-dvjf2") pod "82bd7752-d5ed-4f1f-a955-0cf569099e8c" (UID: "82bd7752-d5ed-4f1f-a955-0cf569099e8c"). InnerVolumeSpecName "kube-api-access-dvjf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.012104 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bd7752-d5ed-4f1f-a955-0cf569099e8c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "82bd7752-d5ed-4f1f-a955-0cf569099e8c" (UID: "82bd7752-d5ed-4f1f-a955-0cf569099e8c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.098208 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82bd7752-d5ed-4f1f-a955-0cf569099e8c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.098280 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvjf2\" (UniqueName: \"kubernetes.io/projected/82bd7752-d5ed-4f1f-a955-0cf569099e8c-kube-api-access-dvjf2\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.670863 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs"] Feb 16 00:12:23 crc kubenswrapper[4698]: E0216 00:12:23.671199 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bd7752-d5ed-4f1f-a955-0cf569099e8c" containerName="route-controller-manager" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.671219 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bd7752-d5ed-4f1f-a955-0cf569099e8c" containerName="route-controller-manager" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.671456 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bd7752-d5ed-4f1f-a955-0cf569099e8c" 
containerName="route-controller-manager" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.672347 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.679958 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs"] Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.707421 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" event={"ID":"82bd7752-d5ed-4f1f-a955-0cf569099e8c","Type":"ContainerDied","Data":"05fc4148df2e714c700eac67640496280c71f6d9ffbfb197d3b8d531f9906645"} Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.707483 4698 scope.go:117] "RemoveContainer" containerID="a8200cd20c8d83d8e958886ef1b623dac8cb3f9e1e3d588b0703709c5c32a036" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.707513 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.728075 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn"] Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.731153 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb87b5856-2d8mn"] Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.807671 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c92c77ab-39d2-4313-a8fc-48ea683e01f2-config\") pod \"route-controller-manager-bd96f5fff-fcfxs\" (UID: \"c92c77ab-39d2-4313-a8fc-48ea683e01f2\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.807774 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c92c77ab-39d2-4313-a8fc-48ea683e01f2-serving-cert\") pod \"route-controller-manager-bd96f5fff-fcfxs\" (UID: \"c92c77ab-39d2-4313-a8fc-48ea683e01f2\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.807837 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh44j\" (UniqueName: \"kubernetes.io/projected/c92c77ab-39d2-4313-a8fc-48ea683e01f2-kube-api-access-hh44j\") pod \"route-controller-manager-bd96f5fff-fcfxs\" (UID: \"c92c77ab-39d2-4313-a8fc-48ea683e01f2\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.807911 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c92c77ab-39d2-4313-a8fc-48ea683e01f2-client-ca\") pod \"route-controller-manager-bd96f5fff-fcfxs\" (UID: \"c92c77ab-39d2-4313-a8fc-48ea683e01f2\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.909272 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c92c77ab-39d2-4313-a8fc-48ea683e01f2-config\") pod \"route-controller-manager-bd96f5fff-fcfxs\" (UID: \"c92c77ab-39d2-4313-a8fc-48ea683e01f2\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.909395 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c92c77ab-39d2-4313-a8fc-48ea683e01f2-serving-cert\") pod \"route-controller-manager-bd96f5fff-fcfxs\" (UID: \"c92c77ab-39d2-4313-a8fc-48ea683e01f2\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.909476 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh44j\" (UniqueName: \"kubernetes.io/projected/c92c77ab-39d2-4313-a8fc-48ea683e01f2-kube-api-access-hh44j\") pod \"route-controller-manager-bd96f5fff-fcfxs\" (UID: \"c92c77ab-39d2-4313-a8fc-48ea683e01f2\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.909528 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c92c77ab-39d2-4313-a8fc-48ea683e01f2-client-ca\") pod \"route-controller-manager-bd96f5fff-fcfxs\" (UID: 
\"c92c77ab-39d2-4313-a8fc-48ea683e01f2\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.911269 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c92c77ab-39d2-4313-a8fc-48ea683e01f2-client-ca\") pod \"route-controller-manager-bd96f5fff-fcfxs\" (UID: \"c92c77ab-39d2-4313-a8fc-48ea683e01f2\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.911566 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c92c77ab-39d2-4313-a8fc-48ea683e01f2-config\") pod \"route-controller-manager-bd96f5fff-fcfxs\" (UID: \"c92c77ab-39d2-4313-a8fc-48ea683e01f2\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.919228 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c92c77ab-39d2-4313-a8fc-48ea683e01f2-serving-cert\") pod \"route-controller-manager-bd96f5fff-fcfxs\" (UID: \"c92c77ab-39d2-4313-a8fc-48ea683e01f2\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.929567 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh44j\" (UniqueName: \"kubernetes.io/projected/c92c77ab-39d2-4313-a8fc-48ea683e01f2-kube-api-access-hh44j\") pod \"route-controller-manager-bd96f5fff-fcfxs\" (UID: \"c92c77ab-39d2-4313-a8fc-48ea683e01f2\") " pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:23 crc kubenswrapper[4698]: I0216 00:12:23.994241 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:24 crc kubenswrapper[4698]: I0216 00:12:24.461761 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs"] Feb 16 00:12:24 crc kubenswrapper[4698]: I0216 00:12:24.715667 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" event={"ID":"c92c77ab-39d2-4313-a8fc-48ea683e01f2","Type":"ContainerStarted","Data":"99e882c9ddc885cbce6d29d5998fffb396ffa9c3fc3ea0c1daf719992d9cd49c"} Feb 16 00:12:24 crc kubenswrapper[4698]: I0216 00:12:24.716133 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" event={"ID":"c92c77ab-39d2-4313-a8fc-48ea683e01f2","Type":"ContainerStarted","Data":"dcd01e54e98267eaaf3c7c0debb877c79578b4c9270ab81fdd0e37d50435b965"} Feb 16 00:12:24 crc kubenswrapper[4698]: I0216 00:12:24.716692 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:24 crc kubenswrapper[4698]: I0216 00:12:24.738074 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" podStartSLOduration=2.738048276 podStartE2EDuration="2.738048276s" podCreationTimestamp="2026-02-16 00:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:12:24.734088607 +0000 UTC m=+354.391987369" watchObservedRunningTime="2026-02-16 00:12:24.738048276 +0000 UTC m=+354.395947048" Feb 16 00:12:25 crc kubenswrapper[4698]: I0216 00:12:25.084708 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-bd96f5fff-fcfxs" Feb 16 00:12:25 crc kubenswrapper[4698]: I0216 00:12:25.238361 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82bd7752-d5ed-4f1f-a955-0cf569099e8c" path="/var/lib/kubelet/pods/82bd7752-d5ed-4f1f-a955-0cf569099e8c/volumes" Feb 16 00:12:27 crc kubenswrapper[4698]: I0216 00:12:27.045424 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:12:27 crc kubenswrapper[4698]: I0216 00:12:27.045798 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.359923 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dd96d466b-bq5pj"] Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.360701 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" podUID="5161e472-d94d-43d9-89e5-b23967396030" containerName="controller-manager" containerID="cri-o://a71b2cf9f4ea3b49c543bd3c239a47077482b804455da9acb273a99b7239119e" gracePeriod=30 Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.747006 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.769735 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5161e472-d94d-43d9-89e5-b23967396030-serving-cert\") pod \"5161e472-d94d-43d9-89e5-b23967396030\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.770554 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-config\") pod \"5161e472-d94d-43d9-89e5-b23967396030\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.770655 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxtjr\" (UniqueName: \"kubernetes.io/projected/5161e472-d94d-43d9-89e5-b23967396030-kube-api-access-pxtjr\") pod \"5161e472-d94d-43d9-89e5-b23967396030\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.770733 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-proxy-ca-bundles\") pod \"5161e472-d94d-43d9-89e5-b23967396030\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.771131 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-client-ca\") pod \"5161e472-d94d-43d9-89e5-b23967396030\" (UID: \"5161e472-d94d-43d9-89e5-b23967396030\") " Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.773065 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-config" (OuterVolumeSpecName: "config") pod "5161e472-d94d-43d9-89e5-b23967396030" (UID: "5161e472-d94d-43d9-89e5-b23967396030"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.773140 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-client-ca" (OuterVolumeSpecName: "client-ca") pod "5161e472-d94d-43d9-89e5-b23967396030" (UID: "5161e472-d94d-43d9-89e5-b23967396030"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.775150 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5161e472-d94d-43d9-89e5-b23967396030" (UID: "5161e472-d94d-43d9-89e5-b23967396030"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.784909 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5161e472-d94d-43d9-89e5-b23967396030-kube-api-access-pxtjr" (OuterVolumeSpecName: "kube-api-access-pxtjr") pod "5161e472-d94d-43d9-89e5-b23967396030" (UID: "5161e472-d94d-43d9-89e5-b23967396030"). InnerVolumeSpecName "kube-api-access-pxtjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.786816 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5161e472-d94d-43d9-89e5-b23967396030-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5161e472-d94d-43d9-89e5-b23967396030" (UID: "5161e472-d94d-43d9-89e5-b23967396030"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.825942 4698 generic.go:334] "Generic (PLEG): container finished" podID="5161e472-d94d-43d9-89e5-b23967396030" containerID="a71b2cf9f4ea3b49c543bd3c239a47077482b804455da9acb273a99b7239119e" exitCode=0 Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.825998 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" event={"ID":"5161e472-d94d-43d9-89e5-b23967396030","Type":"ContainerDied","Data":"a71b2cf9f4ea3b49c543bd3c239a47077482b804455da9acb273a99b7239119e"} Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.826053 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" event={"ID":"5161e472-d94d-43d9-89e5-b23967396030","Type":"ContainerDied","Data":"6adea1f2de4b778f7d6c80c1c4e9e8186a60ea7f7edf022676afb1b43ad99c5a"} Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.826049 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd96d466b-bq5pj" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.826154 4698 scope.go:117] "RemoveContainer" containerID="a71b2cf9f4ea3b49c543bd3c239a47077482b804455da9acb273a99b7239119e" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.847309 4698 scope.go:117] "RemoveContainer" containerID="a71b2cf9f4ea3b49c543bd3c239a47077482b804455da9acb273a99b7239119e" Feb 16 00:12:42 crc kubenswrapper[4698]: E0216 00:12:42.853570 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a71b2cf9f4ea3b49c543bd3c239a47077482b804455da9acb273a99b7239119e\": container with ID starting with a71b2cf9f4ea3b49c543bd3c239a47077482b804455da9acb273a99b7239119e not found: ID does not exist" containerID="a71b2cf9f4ea3b49c543bd3c239a47077482b804455da9acb273a99b7239119e" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.853633 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a71b2cf9f4ea3b49c543bd3c239a47077482b804455da9acb273a99b7239119e"} err="failed to get container status \"a71b2cf9f4ea3b49c543bd3c239a47077482b804455da9acb273a99b7239119e\": rpc error: code = NotFound desc = could not find container \"a71b2cf9f4ea3b49c543bd3c239a47077482b804455da9acb273a99b7239119e\": container with ID starting with a71b2cf9f4ea3b49c543bd3c239a47077482b804455da9acb273a99b7239119e not found: ID does not exist" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.854502 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dd96d466b-bq5pj"] Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.858985 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6dd96d466b-bq5pj"] Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.873930 4698 reconciler_common.go:293] "Volume 
detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.873988 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.874001 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5161e472-d94d-43d9-89e5-b23967396030-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.874015 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5161e472-d94d-43d9-89e5-b23967396030-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:42 crc kubenswrapper[4698]: I0216 00:12:42.874029 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxtjr\" (UniqueName: \"kubernetes.io/projected/5161e472-d94d-43d9-89e5-b23967396030-kube-api-access-pxtjr\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.245593 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5161e472-d94d-43d9-89e5-b23967396030" path="/var/lib/kubelet/pods/5161e472-d94d-43d9-89e5-b23967396030/volumes" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.697374 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b4cf55777-clc5r"] Feb 16 00:12:43 crc kubenswrapper[4698]: E0216 00:12:43.697782 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5161e472-d94d-43d9-89e5-b23967396030" containerName="controller-manager" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.697806 4698 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5161e472-d94d-43d9-89e5-b23967396030" containerName="controller-manager" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.698025 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="5161e472-d94d-43d9-89e5-b23967396030" containerName="controller-manager" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.698657 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.701403 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.703329 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.703909 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.704259 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.704360 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.705798 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.713345 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.717186 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b4cf55777-clc5r"] Feb 16 00:12:43 
crc kubenswrapper[4698]: I0216 00:12:43.785386 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-config\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.785484 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-proxy-ca-bundles\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.785524 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-client-ca\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.785550 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-serving-cert\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.785596 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ctjr\" (UniqueName: 
\"kubernetes.io/projected/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-kube-api-access-8ctjr\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.886978 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-client-ca\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.887051 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-serving-cert\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.887099 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ctjr\" (UniqueName: \"kubernetes.io/projected/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-kube-api-access-8ctjr\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.887166 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-config\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.887271 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-proxy-ca-bundles\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.888812 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-client-ca\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.889516 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-proxy-ca-bundles\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.890215 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-config\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.891592 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-serving-cert\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 
00:12:43 crc kubenswrapper[4698]: I0216 00:12:43.905675 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ctjr\" (UniqueName: \"kubernetes.io/projected/66ab1a60-a3c2-4163-9de7-e0be04bac0fe-kube-api-access-8ctjr\") pod \"controller-manager-6b4cf55777-clc5r\" (UID: \"66ab1a60-a3c2-4163-9de7-e0be04bac0fe\") " pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:44 crc kubenswrapper[4698]: I0216 00:12:44.025136 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:44 crc kubenswrapper[4698]: W0216 00:12:44.270886 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66ab1a60_a3c2_4163_9de7_e0be04bac0fe.slice/crio-d537d3fa076b0ee9643a1e351447663365bb10f9ac1805df3d80993b1ec7d239 WatchSource:0}: Error finding container d537d3fa076b0ee9643a1e351447663365bb10f9ac1805df3d80993b1ec7d239: Status 404 returned error can't find the container with id d537d3fa076b0ee9643a1e351447663365bb10f9ac1805df3d80993b1ec7d239 Feb 16 00:12:44 crc kubenswrapper[4698]: I0216 00:12:44.273193 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b4cf55777-clc5r"] Feb 16 00:12:44 crc kubenswrapper[4698]: I0216 00:12:44.839100 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" event={"ID":"66ab1a60-a3c2-4163-9de7-e0be04bac0fe","Type":"ContainerStarted","Data":"06bfdf81be75aee50176e518612488f5ed67fba469e7e01063482263b9ea261b"} Feb 16 00:12:44 crc kubenswrapper[4698]: I0216 00:12:44.839775 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" 
event={"ID":"66ab1a60-a3c2-4163-9de7-e0be04bac0fe","Type":"ContainerStarted","Data":"d537d3fa076b0ee9643a1e351447663365bb10f9ac1805df3d80993b1ec7d239"} Feb 16 00:12:44 crc kubenswrapper[4698]: I0216 00:12:44.839807 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:44 crc kubenswrapper[4698]: I0216 00:12:44.844338 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" Feb 16 00:12:44 crc kubenswrapper[4698]: I0216 00:12:44.863232 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b4cf55777-clc5r" podStartSLOduration=2.863215982 podStartE2EDuration="2.863215982s" podCreationTimestamp="2026-02-16 00:12:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:12:44.86038753 +0000 UTC m=+374.518286292" watchObservedRunningTime="2026-02-16 00:12:44.863215982 +0000 UTC m=+374.521114744" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.263263 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-whzh9"] Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.265084 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.279961 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-whzh9"] Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.333857 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67b049dc-e073-4dee-8f2d-2b0ca9202d93-ca-trust-extracted\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.333925 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67b049dc-e073-4dee-8f2d-2b0ca9202d93-registry-tls\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.333953 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67b049dc-e073-4dee-8f2d-2b0ca9202d93-bound-sa-token\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.334029 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.334062 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67b049dc-e073-4dee-8f2d-2b0ca9202d93-installation-pull-secrets\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.334088 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsxql\" (UniqueName: \"kubernetes.io/projected/67b049dc-e073-4dee-8f2d-2b0ca9202d93-kube-api-access-zsxql\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.334345 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67b049dc-e073-4dee-8f2d-2b0ca9202d93-trusted-ca\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.334421 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67b049dc-e073-4dee-8f2d-2b0ca9202d93-registry-certificates\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.360279 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.435145 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67b049dc-e073-4dee-8f2d-2b0ca9202d93-ca-trust-extracted\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.435195 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67b049dc-e073-4dee-8f2d-2b0ca9202d93-registry-tls\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.435210 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67b049dc-e073-4dee-8f2d-2b0ca9202d93-bound-sa-token\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.435247 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67b049dc-e073-4dee-8f2d-2b0ca9202d93-installation-pull-secrets\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.435295 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsxql\" (UniqueName: \"kubernetes.io/projected/67b049dc-e073-4dee-8f2d-2b0ca9202d93-kube-api-access-zsxql\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.435333 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67b049dc-e073-4dee-8f2d-2b0ca9202d93-trusted-ca\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.435355 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67b049dc-e073-4dee-8f2d-2b0ca9202d93-registry-certificates\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.436378 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67b049dc-e073-4dee-8f2d-2b0ca9202d93-ca-trust-extracted\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.436959 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67b049dc-e073-4dee-8f2d-2b0ca9202d93-registry-certificates\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.437330 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67b049dc-e073-4dee-8f2d-2b0ca9202d93-trusted-ca\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.443805 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67b049dc-e073-4dee-8f2d-2b0ca9202d93-installation-pull-secrets\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.445954 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67b049dc-e073-4dee-8f2d-2b0ca9202d93-registry-tls\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.457233 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67b049dc-e073-4dee-8f2d-2b0ca9202d93-bound-sa-token\") pod \"image-registry-66df7c8f76-whzh9\" (UID: \"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.458571 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsxql\" (UniqueName: \"kubernetes.io/projected/67b049dc-e073-4dee-8f2d-2b0ca9202d93-kube-api-access-zsxql\") pod \"image-registry-66df7c8f76-whzh9\" (UID: 
\"67b049dc-e073-4dee-8f2d-2b0ca9202d93\") " pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:46 crc kubenswrapper[4698]: I0216 00:12:46.588287 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.071913 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-whzh9"] Feb 16 00:12:47 crc kubenswrapper[4698]: W0216 00:12:47.092907 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67b049dc_e073_4dee_8f2d_2b0ca9202d93.slice/crio-1168dddd6a1752aeb43dab8b862ffee88466cae4881cd8194d54290a6a4a20a2 WatchSource:0}: Error finding container 1168dddd6a1752aeb43dab8b862ffee88466cae4881cd8194d54290a6a4a20a2: Status 404 returned error can't find the container with id 1168dddd6a1752aeb43dab8b862ffee88466cae4881cd8194d54290a6a4a20a2 Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.102684 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vrkd2"] Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.103020 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vrkd2" podUID="b2921317-af1c-4c00-b999-99897d66aaba" containerName="registry-server" containerID="cri-o://c860fe262c3be3ff0ca7f4cf045ff4e8316ef15655b3f38b24d34e11567ff95e" gracePeriod=30 Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.115501 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4644p"] Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.116133 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4644p" podUID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" 
containerName="registry-server" containerID="cri-o://12b8f04efd700727fe674c220ba29b28bb2a066b842cbf237e10d12d68ecb3db" gracePeriod=30 Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.122372 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wdfct"] Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.122678 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" podUID="9679204c-d5b6-489d-9f27-d84d360284ae" containerName="marketplace-operator" containerID="cri-o://611486218e32af6680ed319c28df78ee2293b09dbc07b8e25f3c4a765dbb37ce" gracePeriod=30 Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.131089 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqp8r"] Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.131395 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gqp8r" podUID="6bb57b35-984c-43d1-8c3e-d3311bb457f4" containerName="registry-server" containerID="cri-o://0dc6058affc62865433b4330f35cb918817584b0e5012c302e6000fd40528add" gracePeriod=30 Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.159648 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bwn9c"] Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.159890 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bwn9c" podUID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" containerName="registry-server" containerID="cri-o://24db8f4df12384613787b8650db29affa1ca6d704b141a4b34a86a2b4aa42aca" gracePeriod=30 Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.174717 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-222qh"] Feb 16 00:12:47 crc 
kubenswrapper[4698]: I0216 00:12:47.175645 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-222qh" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.194412 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-222qh"] Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.256784 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qghf2\" (UniqueName: \"kubernetes.io/projected/ce91d9bb-94cd-4bd8-8116-1add3e921236-kube-api-access-qghf2\") pod \"marketplace-operator-79b997595-222qh\" (UID: \"ce91d9bb-94cd-4bd8-8116-1add3e921236\") " pod="openshift-marketplace/marketplace-operator-79b997595-222qh" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.257592 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce91d9bb-94cd-4bd8-8116-1add3e921236-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-222qh\" (UID: \"ce91d9bb-94cd-4bd8-8116-1add3e921236\") " pod="openshift-marketplace/marketplace-operator-79b997595-222qh" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.257785 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce91d9bb-94cd-4bd8-8116-1add3e921236-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-222qh\" (UID: \"ce91d9bb-94cd-4bd8-8116-1add3e921236\") " pod="openshift-marketplace/marketplace-operator-79b997595-222qh" Feb 16 00:12:47 crc kubenswrapper[4698]: E0216 00:12:47.319066 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod194a4f5e_c1fa_4de9_97bb_4b37d7e1ac0c.slice/crio-conmon-24db8f4df12384613787b8650db29affa1ca6d704b141a4b34a86a2b4aa42aca.scope\": RecentStats: unable to find data in memory cache]" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.365052 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qghf2\" (UniqueName: \"kubernetes.io/projected/ce91d9bb-94cd-4bd8-8116-1add3e921236-kube-api-access-qghf2\") pod \"marketplace-operator-79b997595-222qh\" (UID: \"ce91d9bb-94cd-4bd8-8116-1add3e921236\") " pod="openshift-marketplace/marketplace-operator-79b997595-222qh" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.365407 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce91d9bb-94cd-4bd8-8116-1add3e921236-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-222qh\" (UID: \"ce91d9bb-94cd-4bd8-8116-1add3e921236\") " pod="openshift-marketplace/marketplace-operator-79b997595-222qh" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.365464 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce91d9bb-94cd-4bd8-8116-1add3e921236-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-222qh\" (UID: \"ce91d9bb-94cd-4bd8-8116-1add3e921236\") " pod="openshift-marketplace/marketplace-operator-79b997595-222qh" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.367943 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce91d9bb-94cd-4bd8-8116-1add3e921236-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-222qh\" (UID: \"ce91d9bb-94cd-4bd8-8116-1add3e921236\") " pod="openshift-marketplace/marketplace-operator-79b997595-222qh" Feb 16 00:12:47 crc 
kubenswrapper[4698]: I0216 00:12:47.380826 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce91d9bb-94cd-4bd8-8116-1add3e921236-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-222qh\" (UID: \"ce91d9bb-94cd-4bd8-8116-1add3e921236\") " pod="openshift-marketplace/marketplace-operator-79b997595-222qh" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.384653 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qghf2\" (UniqueName: \"kubernetes.io/projected/ce91d9bb-94cd-4bd8-8116-1add3e921236-kube-api-access-qghf2\") pod \"marketplace-operator-79b997595-222qh\" (UID: \"ce91d9bb-94cd-4bd8-8116-1add3e921236\") " pod="openshift-marketplace/marketplace-operator-79b997595-222qh" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.581215 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-222qh" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.713328 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4644p" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.770768 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d741b08c-0e5a-40aa-ba0b-6f11743daa22-catalog-content\") pod \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\" (UID: \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.770853 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d741b08c-0e5a-40aa-ba0b-6f11743daa22-utilities\") pod \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\" (UID: \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.770872 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mczph\" (UniqueName: \"kubernetes.io/projected/d741b08c-0e5a-40aa-ba0b-6f11743daa22-kube-api-access-mczph\") pod \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\" (UID: \"d741b08c-0e5a-40aa-ba0b-6f11743daa22\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.771782 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d741b08c-0e5a-40aa-ba0b-6f11743daa22-utilities" (OuterVolumeSpecName: "utilities") pod "d741b08c-0e5a-40aa-ba0b-6f11743daa22" (UID: "d741b08c-0e5a-40aa-ba0b-6f11743daa22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.771843 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vrkd2" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.776029 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d741b08c-0e5a-40aa-ba0b-6f11743daa22-kube-api-access-mczph" (OuterVolumeSpecName: "kube-api-access-mczph") pod "d741b08c-0e5a-40aa-ba0b-6f11743daa22" (UID: "d741b08c-0e5a-40aa-ba0b-6f11743daa22"). InnerVolumeSpecName "kube-api-access-mczph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.811008 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.829331 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d741b08c-0e5a-40aa-ba0b-6f11743daa22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d741b08c-0e5a-40aa-ba0b-6f11743daa22" (UID: "d741b08c-0e5a-40aa-ba0b-6f11743daa22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.840912 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bwn9c" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.846843 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.871877 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghqnk\" (UniqueName: \"kubernetes.io/projected/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-kube-api-access-ghqnk\") pod \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\" (UID: \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.871940 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-catalog-content\") pod \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\" (UID: \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.871961 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb57b35-984c-43d1-8c3e-d3311bb457f4-utilities\") pod \"6bb57b35-984c-43d1-8c3e-d3311bb457f4\" (UID: \"6bb57b35-984c-43d1-8c3e-d3311bb457f4\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.871991 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9679204c-d5b6-489d-9f27-d84d360284ae-marketplace-trusted-ca\") pod \"9679204c-d5b6-489d-9f27-d84d360284ae\" (UID: \"9679204c-d5b6-489d-9f27-d84d360284ae\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.872014 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2921317-af1c-4c00-b999-99897d66aaba-utilities\") pod \"b2921317-af1c-4c00-b999-99897d66aaba\" (UID: \"b2921317-af1c-4c00-b999-99897d66aaba\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.872051 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-qz2w4\" (UniqueName: \"kubernetes.io/projected/9679204c-d5b6-489d-9f27-d84d360284ae-kube-api-access-qz2w4\") pod \"9679204c-d5b6-489d-9f27-d84d360284ae\" (UID: \"9679204c-d5b6-489d-9f27-d84d360284ae\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.872071 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9679204c-d5b6-489d-9f27-d84d360284ae-marketplace-operator-metrics\") pod \"9679204c-d5b6-489d-9f27-d84d360284ae\" (UID: \"9679204c-d5b6-489d-9f27-d84d360284ae\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.872094 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgg5v\" (UniqueName: \"kubernetes.io/projected/6bb57b35-984c-43d1-8c3e-d3311bb457f4-kube-api-access-pgg5v\") pod \"6bb57b35-984c-43d1-8c3e-d3311bb457f4\" (UID: \"6bb57b35-984c-43d1-8c3e-d3311bb457f4\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.872111 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-utilities\") pod \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\" (UID: \"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.872143 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2921317-af1c-4c00-b999-99897d66aaba-catalog-content\") pod \"b2921317-af1c-4c00-b999-99897d66aaba\" (UID: \"b2921317-af1c-4c00-b999-99897d66aaba\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.872177 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb57b35-984c-43d1-8c3e-d3311bb457f4-catalog-content\") pod \"6bb57b35-984c-43d1-8c3e-d3311bb457f4\" (UID: 
\"6bb57b35-984c-43d1-8c3e-d3311bb457f4\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.872203 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbbdz\" (UniqueName: \"kubernetes.io/projected/b2921317-af1c-4c00-b999-99897d66aaba-kube-api-access-zbbdz\") pod \"b2921317-af1c-4c00-b999-99897d66aaba\" (UID: \"b2921317-af1c-4c00-b999-99897d66aaba\") " Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.872417 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d741b08c-0e5a-40aa-ba0b-6f11743daa22-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.872429 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mczph\" (UniqueName: \"kubernetes.io/projected/d741b08c-0e5a-40aa-ba0b-6f11743daa22-kube-api-access-mczph\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.872438 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d741b08c-0e5a-40aa-ba0b-6f11743daa22-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.872870 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bb57b35-984c-43d1-8c3e-d3311bb457f4-utilities" (OuterVolumeSpecName: "utilities") pod "6bb57b35-984c-43d1-8c3e-d3311bb457f4" (UID: "6bb57b35-984c-43d1-8c3e-d3311bb457f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.874276 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2921317-af1c-4c00-b999-99897d66aaba-utilities" (OuterVolumeSpecName: "utilities") pod "b2921317-af1c-4c00-b999-99897d66aaba" (UID: "b2921317-af1c-4c00-b999-99897d66aaba"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.874982 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9679204c-d5b6-489d-9f27-d84d360284ae-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "9679204c-d5b6-489d-9f27-d84d360284ae" (UID: "9679204c-d5b6-489d-9f27-d84d360284ae"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.875171 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-utilities" (OuterVolumeSpecName: "utilities") pod "194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" (UID: "194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.881198 4698 generic.go:334] "Generic (PLEG): container finished" podID="b2921317-af1c-4c00-b999-99897d66aaba" containerID="c860fe262c3be3ff0ca7f4cf045ff4e8316ef15655b3f38b24d34e11567ff95e" exitCode=0 Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.881321 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrkd2" event={"ID":"b2921317-af1c-4c00-b999-99897d66aaba","Type":"ContainerDied","Data":"c860fe262c3be3ff0ca7f4cf045ff4e8316ef15655b3f38b24d34e11567ff95e"} Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.881356 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrkd2" event={"ID":"b2921317-af1c-4c00-b999-99897d66aaba","Type":"ContainerDied","Data":"7c0268a105712997d221047c10d1cc27e4177f72a683b32afc3b5fa4f7a71f86"} Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.881373 4698 scope.go:117] "RemoveContainer" 
containerID="c860fe262c3be3ff0ca7f4cf045ff4e8316ef15655b3f38b24d34e11567ff95e" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.882280 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrkd2" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.882935 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9679204c-d5b6-489d-9f27-d84d360284ae-kube-api-access-qz2w4" (OuterVolumeSpecName: "kube-api-access-qz2w4") pod "9679204c-d5b6-489d-9f27-d84d360284ae" (UID: "9679204c-d5b6-489d-9f27-d84d360284ae"). InnerVolumeSpecName "kube-api-access-qz2w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.889002 4698 generic.go:334] "Generic (PLEG): container finished" podID="6bb57b35-984c-43d1-8c3e-d3311bb457f4" containerID="0dc6058affc62865433b4330f35cb918817584b0e5012c302e6000fd40528add" exitCode=0 Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.889055 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqp8r" event={"ID":"6bb57b35-984c-43d1-8c3e-d3311bb457f4","Type":"ContainerDied","Data":"0dc6058affc62865433b4330f35cb918817584b0e5012c302e6000fd40528add"} Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.889077 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqp8r" event={"ID":"6bb57b35-984c-43d1-8c3e-d3311bb457f4","Type":"ContainerDied","Data":"0330dca99d8abf690b64c63398286c45aba5b5f947f2d17c05a49675f11265c4"} Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.889133 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gqp8r" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.905422 4698 scope.go:117] "RemoveContainer" containerID="45305034106ee127a7f49af1e17dfc6014b0749348cbf6df010079b26e03815d" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.918285 4698 generic.go:334] "Generic (PLEG): container finished" podID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" containerID="12b8f04efd700727fe674c220ba29b28bb2a066b842cbf237e10d12d68ecb3db" exitCode=0 Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.918369 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4644p" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.918384 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4644p" event={"ID":"d741b08c-0e5a-40aa-ba0b-6f11743daa22","Type":"ContainerDied","Data":"12b8f04efd700727fe674c220ba29b28bb2a066b842cbf237e10d12d68ecb3db"} Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.918434 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4644p" event={"ID":"d741b08c-0e5a-40aa-ba0b-6f11743daa22","Type":"ContainerDied","Data":"e53acc45b1d612ee86912a9cec36313f34474541f320124aa515011d2e8fbf74"} Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.922598 4698 generic.go:334] "Generic (PLEG): container finished" podID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" containerID="24db8f4df12384613787b8650db29affa1ca6d704b141a4b34a86a2b4aa42aca" exitCode=0 Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.922728 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bwn9c" Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.922848 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwn9c" event={"ID":"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c","Type":"ContainerDied","Data":"24db8f4df12384613787b8650db29affa1ca6d704b141a4b34a86a2b4aa42aca"} Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.922912 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwn9c" event={"ID":"194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c","Type":"ContainerDied","Data":"d5a7351cbc74f725b41029b83aa00dc53d976b827913614ce1ea41ee138b166c"} Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.924594 4698 generic.go:334] "Generic (PLEG): container finished" podID="9679204c-d5b6-489d-9f27-d84d360284ae" containerID="611486218e32af6680ed319c28df78ee2293b09dbc07b8e25f3c4a765dbb37ce" exitCode=0 Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.924718 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct"
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.924951 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" event={"ID":"9679204c-d5b6-489d-9f27-d84d360284ae","Type":"ContainerDied","Data":"611486218e32af6680ed319c28df78ee2293b09dbc07b8e25f3c4a765dbb37ce"}
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.924974 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wdfct" event={"ID":"9679204c-d5b6-489d-9f27-d84d360284ae","Type":"ContainerDied","Data":"71b2ec4ec2be8c26b30d41840e4d9d5786414bc380ab249a4a3429992b6069cf"}
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.929874 4698 scope.go:117] "RemoveContainer" containerID="ff0d3d0126444094b27ddf6ee9d8c21060f16fc0e5971a89c45c6da7cb0794c7"
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.942416 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" event={"ID":"67b049dc-e073-4dee-8f2d-2b0ca9202d93","Type":"ContainerStarted","Data":"ceb7981707126b2edc6265ea0fe1d49bfe635417e6a4e6b9d32862f5a282d940"}
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.942471 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" event={"ID":"67b049dc-e073-4dee-8f2d-2b0ca9202d93","Type":"ContainerStarted","Data":"1168dddd6a1752aeb43dab8b862ffee88466cae4881cd8194d54290a6a4a20a2"}
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.943276 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-whzh9"
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.945218 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9679204c-d5b6-489d-9f27-d84d360284ae-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "9679204c-d5b6-489d-9f27-d84d360284ae" (UID: "9679204c-d5b6-489d-9f27-d84d360284ae"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.945441 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-kube-api-access-ghqnk" (OuterVolumeSpecName: "kube-api-access-ghqnk") pod "194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" (UID: "194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c"). InnerVolumeSpecName "kube-api-access-ghqnk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.945697 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb57b35-984c-43d1-8c3e-d3311bb457f4-kube-api-access-pgg5v" (OuterVolumeSpecName: "kube-api-access-pgg5v") pod "6bb57b35-984c-43d1-8c3e-d3311bb457f4" (UID: "6bb57b35-984c-43d1-8c3e-d3311bb457f4"). InnerVolumeSpecName "kube-api-access-pgg5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.948486 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4644p"]
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.954087 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4644p"]
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.956057 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2921317-af1c-4c00-b999-99897d66aaba-kube-api-access-zbbdz" (OuterVolumeSpecName: "kube-api-access-zbbdz") pod "b2921317-af1c-4c00-b999-99897d66aaba" (UID: "b2921317-af1c-4c00-b999-99897d66aaba"). InnerVolumeSpecName "kube-api-access-zbbdz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.959094 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bb57b35-984c-43d1-8c3e-d3311bb457f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bb57b35-984c-43d1-8c3e-d3311bb457f4" (UID: "6bb57b35-984c-43d1-8c3e-d3311bb457f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.968275 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" podStartSLOduration=1.968253436 podStartE2EDuration="1.968253436s" podCreationTimestamp="2026-02-16 00:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:12:47.962704365 +0000 UTC m=+377.620603137" watchObservedRunningTime="2026-02-16 00:12:47.968253436 +0000 UTC m=+377.626152188"
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.970635 4698 scope.go:117] "RemoveContainer" containerID="c860fe262c3be3ff0ca7f4cf045ff4e8316ef15655b3f38b24d34e11567ff95e"
Feb 16 00:12:47 crc kubenswrapper[4698]: E0216 00:12:47.971239 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c860fe262c3be3ff0ca7f4cf045ff4e8316ef15655b3f38b24d34e11567ff95e\": container with ID starting with c860fe262c3be3ff0ca7f4cf045ff4e8316ef15655b3f38b24d34e11567ff95e not found: ID does not exist" containerID="c860fe262c3be3ff0ca7f4cf045ff4e8316ef15655b3f38b24d34e11567ff95e"
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.971294 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c860fe262c3be3ff0ca7f4cf045ff4e8316ef15655b3f38b24d34e11567ff95e"} err="failed to get container status \"c860fe262c3be3ff0ca7f4cf045ff4e8316ef15655b3f38b24d34e11567ff95e\": rpc error: code = NotFound desc = could not find container \"c860fe262c3be3ff0ca7f4cf045ff4e8316ef15655b3f38b24d34e11567ff95e\": container with ID starting with c860fe262c3be3ff0ca7f4cf045ff4e8316ef15655b3f38b24d34e11567ff95e not found: ID does not exist"
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.971328 4698 scope.go:117] "RemoveContainer" containerID="45305034106ee127a7f49af1e17dfc6014b0749348cbf6df010079b26e03815d"
Feb 16 00:12:47 crc kubenswrapper[4698]: E0216 00:12:47.971893 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45305034106ee127a7f49af1e17dfc6014b0749348cbf6df010079b26e03815d\": container with ID starting with 45305034106ee127a7f49af1e17dfc6014b0749348cbf6df010079b26e03815d not found: ID does not exist" containerID="45305034106ee127a7f49af1e17dfc6014b0749348cbf6df010079b26e03815d"
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.971947 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45305034106ee127a7f49af1e17dfc6014b0749348cbf6df010079b26e03815d"} err="failed to get container status \"45305034106ee127a7f49af1e17dfc6014b0749348cbf6df010079b26e03815d\": rpc error: code = NotFound desc = could not find container \"45305034106ee127a7f49af1e17dfc6014b0749348cbf6df010079b26e03815d\": container with ID starting with 45305034106ee127a7f49af1e17dfc6014b0749348cbf6df010079b26e03815d not found: ID does not exist"
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.971976 4698 scope.go:117] "RemoveContainer" containerID="ff0d3d0126444094b27ddf6ee9d8c21060f16fc0e5971a89c45c6da7cb0794c7"
Feb 16 00:12:47 crc kubenswrapper[4698]: E0216 00:12:47.972429 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff0d3d0126444094b27ddf6ee9d8c21060f16fc0e5971a89c45c6da7cb0794c7\": container with ID starting with ff0d3d0126444094b27ddf6ee9d8c21060f16fc0e5971a89c45c6da7cb0794c7 not found: ID does not exist" containerID="ff0d3d0126444094b27ddf6ee9d8c21060f16fc0e5971a89c45c6da7cb0794c7"
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.972455 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff0d3d0126444094b27ddf6ee9d8c21060f16fc0e5971a89c45c6da7cb0794c7"} err="failed to get container status \"ff0d3d0126444094b27ddf6ee9d8c21060f16fc0e5971a89c45c6da7cb0794c7\": rpc error: code = NotFound desc = could not find container \"ff0d3d0126444094b27ddf6ee9d8c21060f16fc0e5971a89c45c6da7cb0794c7\": container with ID starting with ff0d3d0126444094b27ddf6ee9d8c21060f16fc0e5971a89c45c6da7cb0794c7 not found: ID does not exist"
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.972470 4698 scope.go:117] "RemoveContainer" containerID="0dc6058affc62865433b4330f35cb918817584b0e5012c302e6000fd40528add"
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.973519 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb57b35-984c-43d1-8c3e-d3311bb457f4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.973546 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbbdz\" (UniqueName: \"kubernetes.io/projected/b2921317-af1c-4c00-b999-99897d66aaba-kube-api-access-zbbdz\") on node \"crc\" DevicePath \"\""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.973561 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghqnk\" (UniqueName: \"kubernetes.io/projected/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-kube-api-access-ghqnk\") on node \"crc\" DevicePath \"\""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.973573 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb57b35-984c-43d1-8c3e-d3311bb457f4-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.973583 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9679204c-d5b6-489d-9f27-d84d360284ae-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.973592 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2921317-af1c-4c00-b999-99897d66aaba-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.973601 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz2w4\" (UniqueName: \"kubernetes.io/projected/9679204c-d5b6-489d-9f27-d84d360284ae-kube-api-access-qz2w4\") on node \"crc\" DevicePath \"\""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.973615 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9679204c-d5b6-489d-9f27-d84d360284ae-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.973642 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgg5v\" (UniqueName: \"kubernetes.io/projected/6bb57b35-984c-43d1-8c3e-d3311bb457f4-kube-api-access-pgg5v\") on node \"crc\" DevicePath \"\""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.973653 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.998087 4698 scope.go:117] "RemoveContainer" containerID="f301a13b64cb02c50a29807c9edeb9485be2968a76bf691e34f7e96fa4e52fae"
Feb 16 00:12:47 crc kubenswrapper[4698]: I0216 00:12:47.999127 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2921317-af1c-4c00-b999-99897d66aaba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2921317-af1c-4c00-b999-99897d66aaba" (UID: "b2921317-af1c-4c00-b999-99897d66aaba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.010473 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" (UID: "194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.012892 4698 scope.go:117] "RemoveContainer" containerID="615475c5bf369669b9108711e08cd1efacc9d25a20095ed90e640d42d5974ede"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.024434 4698 scope.go:117] "RemoveContainer" containerID="0dc6058affc62865433b4330f35cb918817584b0e5012c302e6000fd40528add"
Feb 16 00:12:48 crc kubenswrapper[4698]: E0216 00:12:48.024790 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc6058affc62865433b4330f35cb918817584b0e5012c302e6000fd40528add\": container with ID starting with 0dc6058affc62865433b4330f35cb918817584b0e5012c302e6000fd40528add not found: ID does not exist" containerID="0dc6058affc62865433b4330f35cb918817584b0e5012c302e6000fd40528add"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.024830 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc6058affc62865433b4330f35cb918817584b0e5012c302e6000fd40528add"} err="failed to get container status \"0dc6058affc62865433b4330f35cb918817584b0e5012c302e6000fd40528add\": rpc error: code = NotFound desc = could not find container \"0dc6058affc62865433b4330f35cb918817584b0e5012c302e6000fd40528add\": container with ID starting with 0dc6058affc62865433b4330f35cb918817584b0e5012c302e6000fd40528add not found: ID does not exist"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.024860 4698 scope.go:117] "RemoveContainer" containerID="f301a13b64cb02c50a29807c9edeb9485be2968a76bf691e34f7e96fa4e52fae"
Feb 16 00:12:48 crc kubenswrapper[4698]: E0216 00:12:48.025225 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f301a13b64cb02c50a29807c9edeb9485be2968a76bf691e34f7e96fa4e52fae\": container with ID starting with f301a13b64cb02c50a29807c9edeb9485be2968a76bf691e34f7e96fa4e52fae not found: ID does not exist" containerID="f301a13b64cb02c50a29807c9edeb9485be2968a76bf691e34f7e96fa4e52fae"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.025286 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f301a13b64cb02c50a29807c9edeb9485be2968a76bf691e34f7e96fa4e52fae"} err="failed to get container status \"f301a13b64cb02c50a29807c9edeb9485be2968a76bf691e34f7e96fa4e52fae\": rpc error: code = NotFound desc = could not find container \"f301a13b64cb02c50a29807c9edeb9485be2968a76bf691e34f7e96fa4e52fae\": container with ID starting with f301a13b64cb02c50a29807c9edeb9485be2968a76bf691e34f7e96fa4e52fae not found: ID does not exist"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.025326 4698 scope.go:117] "RemoveContainer" containerID="615475c5bf369669b9108711e08cd1efacc9d25a20095ed90e640d42d5974ede"
Feb 16 00:12:48 crc kubenswrapper[4698]: E0216 00:12:48.025675 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"615475c5bf369669b9108711e08cd1efacc9d25a20095ed90e640d42d5974ede\": container with ID starting with 615475c5bf369669b9108711e08cd1efacc9d25a20095ed90e640d42d5974ede not found: ID does not exist" containerID="615475c5bf369669b9108711e08cd1efacc9d25a20095ed90e640d42d5974ede"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.025731 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"615475c5bf369669b9108711e08cd1efacc9d25a20095ed90e640d42d5974ede"} err="failed to get container status \"615475c5bf369669b9108711e08cd1efacc9d25a20095ed90e640d42d5974ede\": rpc error: code = NotFound desc = could not find container \"615475c5bf369669b9108711e08cd1efacc9d25a20095ed90e640d42d5974ede\": container with ID starting with 615475c5bf369669b9108711e08cd1efacc9d25a20095ed90e640d42d5974ede not found: ID does not exist"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.025763 4698 scope.go:117] "RemoveContainer" containerID="12b8f04efd700727fe674c220ba29b28bb2a066b842cbf237e10d12d68ecb3db"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.048363 4698 scope.go:117] "RemoveContainer" containerID="21912a707a1e7ec35c84eed5b1495b7517fdce6bf285a6fd66fb93bea3932e39"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.073413 4698 scope.go:117] "RemoveContainer" containerID="f4c9f9f7c15c78883c85c3362f231c3d2e7afdf0f5194e48f1d7672d0d8e1231"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.075231 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2921317-af1c-4c00-b999-99897d66aaba-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.075302 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.076254 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-222qh"]
Feb 16 00:12:48 crc kubenswrapper[4698]: W0216 00:12:48.081241 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce91d9bb_94cd_4bd8_8116_1add3e921236.slice/crio-68aa79059e0acab34ce99cb26a2ca4b3e82b2f3ada58ba0fb052d20759fb7ab4 WatchSource:0}: Error finding container 68aa79059e0acab34ce99cb26a2ca4b3e82b2f3ada58ba0fb052d20759fb7ab4: Status 404 returned error can't find the container with id 68aa79059e0acab34ce99cb26a2ca4b3e82b2f3ada58ba0fb052d20759fb7ab4
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.086858 4698 scope.go:117] "RemoveContainer" containerID="12b8f04efd700727fe674c220ba29b28bb2a066b842cbf237e10d12d68ecb3db"
Feb 16 00:12:48 crc kubenswrapper[4698]: E0216 00:12:48.088267 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b8f04efd700727fe674c220ba29b28bb2a066b842cbf237e10d12d68ecb3db\": container with ID starting with 12b8f04efd700727fe674c220ba29b28bb2a066b842cbf237e10d12d68ecb3db not found: ID does not exist" containerID="12b8f04efd700727fe674c220ba29b28bb2a066b842cbf237e10d12d68ecb3db"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.088303 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b8f04efd700727fe674c220ba29b28bb2a066b842cbf237e10d12d68ecb3db"} err="failed to get container status \"12b8f04efd700727fe674c220ba29b28bb2a066b842cbf237e10d12d68ecb3db\": rpc error: code = NotFound desc = could not find container \"12b8f04efd700727fe674c220ba29b28bb2a066b842cbf237e10d12d68ecb3db\": container with ID starting with 12b8f04efd700727fe674c220ba29b28bb2a066b842cbf237e10d12d68ecb3db not found: ID does not exist"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.088339 4698 scope.go:117] "RemoveContainer" containerID="21912a707a1e7ec35c84eed5b1495b7517fdce6bf285a6fd66fb93bea3932e39"
Feb 16 00:12:48 crc kubenswrapper[4698]: E0216 00:12:48.088959 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21912a707a1e7ec35c84eed5b1495b7517fdce6bf285a6fd66fb93bea3932e39\": container with ID starting with 21912a707a1e7ec35c84eed5b1495b7517fdce6bf285a6fd66fb93bea3932e39 not found: ID does not exist" containerID="21912a707a1e7ec35c84eed5b1495b7517fdce6bf285a6fd66fb93bea3932e39"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.088991 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21912a707a1e7ec35c84eed5b1495b7517fdce6bf285a6fd66fb93bea3932e39"} err="failed to get container status \"21912a707a1e7ec35c84eed5b1495b7517fdce6bf285a6fd66fb93bea3932e39\": rpc error: code = NotFound desc = could not find container \"21912a707a1e7ec35c84eed5b1495b7517fdce6bf285a6fd66fb93bea3932e39\": container with ID starting with 21912a707a1e7ec35c84eed5b1495b7517fdce6bf285a6fd66fb93bea3932e39 not found: ID does not exist"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.089007 4698 scope.go:117] "RemoveContainer" containerID="f4c9f9f7c15c78883c85c3362f231c3d2e7afdf0f5194e48f1d7672d0d8e1231"
Feb 16 00:12:48 crc kubenswrapper[4698]: E0216 00:12:48.089301 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4c9f9f7c15c78883c85c3362f231c3d2e7afdf0f5194e48f1d7672d0d8e1231\": container with ID starting with f4c9f9f7c15c78883c85c3362f231c3d2e7afdf0f5194e48f1d7672d0d8e1231 not found: ID does not exist" containerID="f4c9f9f7c15c78883c85c3362f231c3d2e7afdf0f5194e48f1d7672d0d8e1231"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.089319 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c9f9f7c15c78883c85c3362f231c3d2e7afdf0f5194e48f1d7672d0d8e1231"} err="failed to get container status \"f4c9f9f7c15c78883c85c3362f231c3d2e7afdf0f5194e48f1d7672d0d8e1231\": rpc error: code = NotFound desc = could not find container \"f4c9f9f7c15c78883c85c3362f231c3d2e7afdf0f5194e48f1d7672d0d8e1231\": container with ID starting with f4c9f9f7c15c78883c85c3362f231c3d2e7afdf0f5194e48f1d7672d0d8e1231 not found: ID does not exist"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.089331 4698 scope.go:117] "RemoveContainer" containerID="24db8f4df12384613787b8650db29affa1ca6d704b141a4b34a86a2b4aa42aca"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.106119 4698 scope.go:117] "RemoveContainer" containerID="07b4cd140b99831c83628322b31578af0b71a62549bfc2508bdb2175e1d8764a"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.125920 4698 scope.go:117] "RemoveContainer" containerID="95e53ced78ffc4fc72d8abfbe9542921536e659be7df7c3d05145f1152f529fb"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.154912 4698 scope.go:117] "RemoveContainer" containerID="24db8f4df12384613787b8650db29affa1ca6d704b141a4b34a86a2b4aa42aca"
Feb 16 00:12:48 crc kubenswrapper[4698]: E0216 00:12:48.160422 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24db8f4df12384613787b8650db29affa1ca6d704b141a4b34a86a2b4aa42aca\": container with ID starting with 24db8f4df12384613787b8650db29affa1ca6d704b141a4b34a86a2b4aa42aca not found: ID does not exist" containerID="24db8f4df12384613787b8650db29affa1ca6d704b141a4b34a86a2b4aa42aca"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.160488 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24db8f4df12384613787b8650db29affa1ca6d704b141a4b34a86a2b4aa42aca"} err="failed to get container status \"24db8f4df12384613787b8650db29affa1ca6d704b141a4b34a86a2b4aa42aca\": rpc error: code = NotFound desc = could not find container \"24db8f4df12384613787b8650db29affa1ca6d704b141a4b34a86a2b4aa42aca\": container with ID starting with 24db8f4df12384613787b8650db29affa1ca6d704b141a4b34a86a2b4aa42aca not found: ID does not exist"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.160535 4698 scope.go:117] "RemoveContainer" containerID="07b4cd140b99831c83628322b31578af0b71a62549bfc2508bdb2175e1d8764a"
Feb 16 00:12:48 crc kubenswrapper[4698]: E0216 00:12:48.164648 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07b4cd140b99831c83628322b31578af0b71a62549bfc2508bdb2175e1d8764a\": container with ID starting with 07b4cd140b99831c83628322b31578af0b71a62549bfc2508bdb2175e1d8764a not found: ID does not exist" containerID="07b4cd140b99831c83628322b31578af0b71a62549bfc2508bdb2175e1d8764a"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.165411 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07b4cd140b99831c83628322b31578af0b71a62549bfc2508bdb2175e1d8764a"} err="failed to get container status \"07b4cd140b99831c83628322b31578af0b71a62549bfc2508bdb2175e1d8764a\": rpc error: code = NotFound desc = could not find container \"07b4cd140b99831c83628322b31578af0b71a62549bfc2508bdb2175e1d8764a\": container with ID starting with 07b4cd140b99831c83628322b31578af0b71a62549bfc2508bdb2175e1d8764a not found: ID does not exist"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.165981 4698 scope.go:117] "RemoveContainer" containerID="95e53ced78ffc4fc72d8abfbe9542921536e659be7df7c3d05145f1152f529fb"
Feb 16 00:12:48 crc kubenswrapper[4698]: E0216 00:12:48.169305 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95e53ced78ffc4fc72d8abfbe9542921536e659be7df7c3d05145f1152f529fb\": container with ID starting with 95e53ced78ffc4fc72d8abfbe9542921536e659be7df7c3d05145f1152f529fb not found: ID does not exist" containerID="95e53ced78ffc4fc72d8abfbe9542921536e659be7df7c3d05145f1152f529fb"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.169370 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95e53ced78ffc4fc72d8abfbe9542921536e659be7df7c3d05145f1152f529fb"} err="failed to get container status \"95e53ced78ffc4fc72d8abfbe9542921536e659be7df7c3d05145f1152f529fb\": rpc error: code = NotFound desc = could not find container \"95e53ced78ffc4fc72d8abfbe9542921536e659be7df7c3d05145f1152f529fb\": container with ID starting with 95e53ced78ffc4fc72d8abfbe9542921536e659be7df7c3d05145f1152f529fb not found: ID does not exist"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.169413 4698 scope.go:117] "RemoveContainer" containerID="611486218e32af6680ed319c28df78ee2293b09dbc07b8e25f3c4a765dbb37ce"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.185310 4698 scope.go:117] "RemoveContainer" containerID="c720b800721bdd8f1712d4e0d15bc8527d36ea11c701fe5150fd9f4006c94cbe"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.209403 4698 scope.go:117] "RemoveContainer" containerID="611486218e32af6680ed319c28df78ee2293b09dbc07b8e25f3c4a765dbb37ce"
Feb 16 00:12:48 crc kubenswrapper[4698]: E0216 00:12:48.210087 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"611486218e32af6680ed319c28df78ee2293b09dbc07b8e25f3c4a765dbb37ce\": container with ID starting with 611486218e32af6680ed319c28df78ee2293b09dbc07b8e25f3c4a765dbb37ce not found: ID does not exist" containerID="611486218e32af6680ed319c28df78ee2293b09dbc07b8e25f3c4a765dbb37ce"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.210168 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"611486218e32af6680ed319c28df78ee2293b09dbc07b8e25f3c4a765dbb37ce"} err="failed to get container status \"611486218e32af6680ed319c28df78ee2293b09dbc07b8e25f3c4a765dbb37ce\": rpc error: code = NotFound desc = could not find container \"611486218e32af6680ed319c28df78ee2293b09dbc07b8e25f3c4a765dbb37ce\": container with ID starting with 611486218e32af6680ed319c28df78ee2293b09dbc07b8e25f3c4a765dbb37ce not found: ID does not exist"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.210222 4698 scope.go:117] "RemoveContainer" containerID="c720b800721bdd8f1712d4e0d15bc8527d36ea11c701fe5150fd9f4006c94cbe"
Feb 16 00:12:48 crc kubenswrapper[4698]: E0216 00:12:48.212788 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c720b800721bdd8f1712d4e0d15bc8527d36ea11c701fe5150fd9f4006c94cbe\": container with ID starting with c720b800721bdd8f1712d4e0d15bc8527d36ea11c701fe5150fd9f4006c94cbe not found: ID does not exist" containerID="c720b800721bdd8f1712d4e0d15bc8527d36ea11c701fe5150fd9f4006c94cbe"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.212819 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c720b800721bdd8f1712d4e0d15bc8527d36ea11c701fe5150fd9f4006c94cbe"} err="failed to get container status \"c720b800721bdd8f1712d4e0d15bc8527d36ea11c701fe5150fd9f4006c94cbe\": rpc error: code = NotFound desc = could not find container \"c720b800721bdd8f1712d4e0d15bc8527d36ea11c701fe5150fd9f4006c94cbe\": container with ID starting with c720b800721bdd8f1712d4e0d15bc8527d36ea11c701fe5150fd9f4006c94cbe not found: ID does not exist"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.223314 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqp8r"]
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.248374 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqp8r"]
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.254296 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vrkd2"]
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.257293 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vrkd2"]
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.296063 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bwn9c"]
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.306865 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bwn9c"]
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.310874 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wdfct"]
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.314580 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wdfct"]
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.953221 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-222qh" event={"ID":"ce91d9bb-94cd-4bd8-8116-1add3e921236","Type":"ContainerStarted","Data":"6fae26d79f34584da9f1643a4f59f2f19fa4bcaf162a8a4b9afd79bf3576f830"}
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.953265 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-222qh" event={"ID":"ce91d9bb-94cd-4bd8-8116-1add3e921236","Type":"ContainerStarted","Data":"68aa79059e0acab34ce99cb26a2ca4b3e82b2f3ada58ba0fb052d20759fb7ab4"}
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.953974 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-222qh"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.961317 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-222qh"
Feb 16 00:12:48 crc kubenswrapper[4698]: I0216 00:12:48.974349 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-222qh" podStartSLOduration=1.9743314189999999 podStartE2EDuration="1.974331419s" podCreationTimestamp="2026-02-16 00:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:12:48.971774006 +0000 UTC m=+378.629672798" watchObservedRunningTime="2026-02-16 00:12:48.974331419 +0000 UTC m=+378.632230181"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.241670 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" path="/var/lib/kubelet/pods/194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c/volumes"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.242852 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bb57b35-984c-43d1-8c3e-d3311bb457f4" path="/var/lib/kubelet/pods/6bb57b35-984c-43d1-8c3e-d3311bb457f4/volumes"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.243658 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9679204c-d5b6-489d-9f27-d84d360284ae" path="/var/lib/kubelet/pods/9679204c-d5b6-489d-9f27-d84d360284ae/volumes"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.244840 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2921317-af1c-4c00-b999-99897d66aaba" path="/var/lib/kubelet/pods/b2921317-af1c-4c00-b999-99897d66aaba/volumes"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.245549 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" path="/var/lib/kubelet/pods/d741b08c-0e5a-40aa-ba0b-6f11743daa22/volumes"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.332444 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w9lwt"]
Feb 16 00:12:49 crc kubenswrapper[4698]: E0216 00:12:49.332903 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2921317-af1c-4c00-b999-99897d66aaba" containerName="extract-utilities"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.332939 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2921317-af1c-4c00-b999-99897d66aaba" containerName="extract-utilities"
Feb 16 00:12:49 crc kubenswrapper[4698]: E0216 00:12:49.332952 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb57b35-984c-43d1-8c3e-d3311bb457f4" containerName="extract-utilities"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.332962 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb57b35-984c-43d1-8c3e-d3311bb457f4" containerName="extract-utilities"
Feb 16 00:12:49 crc kubenswrapper[4698]: E0216 00:12:49.332976 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb57b35-984c-43d1-8c3e-d3311bb457f4" containerName="extract-content"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.332984 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb57b35-984c-43d1-8c3e-d3311bb457f4" containerName="extract-content"
Feb 16 00:12:49 crc kubenswrapper[4698]: E0216 00:12:49.332995 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" containerName="extract-utilities"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333003 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" containerName="extract-utilities"
Feb 16 00:12:49 crc kubenswrapper[4698]: E0216 00:12:49.333018 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" containerName="extract-utilities"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333026 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" containerName="extract-utilities"
Feb 16 00:12:49 crc kubenswrapper[4698]: E0216 00:12:49.333034 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" containerName="extract-content"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333042 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" containerName="extract-content"
Feb 16 00:12:49 crc kubenswrapper[4698]: E0216 00:12:49.333053 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9679204c-d5b6-489d-9f27-d84d360284ae" containerName="marketplace-operator"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333061 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9679204c-d5b6-489d-9f27-d84d360284ae" containerName="marketplace-operator"
Feb 16 00:12:49 crc kubenswrapper[4698]: E0216 00:12:49.333075 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2921317-af1c-4c00-b999-99897d66aaba" containerName="registry-server"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333082 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2921317-af1c-4c00-b999-99897d66aaba" containerName="registry-server"
Feb 16 00:12:49 crc kubenswrapper[4698]: E0216 00:12:49.333092 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9679204c-d5b6-489d-9f27-d84d360284ae" containerName="marketplace-operator"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333100 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9679204c-d5b6-489d-9f27-d84d360284ae" containerName="marketplace-operator"
Feb 16 00:12:49 crc kubenswrapper[4698]: E0216 00:12:49.333111 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2921317-af1c-4c00-b999-99897d66aaba" containerName="extract-content"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333118 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2921317-af1c-4c00-b999-99897d66aaba" containerName="extract-content"
Feb 16 00:12:49 crc kubenswrapper[4698]: E0216 00:12:49.333130 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" containerName="registry-server"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333140 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" containerName="registry-server"
Feb 16 00:12:49 crc kubenswrapper[4698]: E0216 00:12:49.333157 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" containerName="registry-server"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333166 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" containerName="registry-server"
Feb 16 00:12:49 crc kubenswrapper[4698]: E0216 00:12:49.333176 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" containerName="extract-content"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333184 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" containerName="extract-content"
Feb 16 00:12:49 crc kubenswrapper[4698]: E0216 00:12:49.333193 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb57b35-984c-43d1-8c3e-d3311bb457f4" containerName="registry-server"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333200 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb57b35-984c-43d1-8c3e-d3311bb457f4" containerName="registry-server"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333338 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9679204c-d5b6-489d-9f27-d84d360284ae" containerName="marketplace-operator"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333350 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="194a4f5e-c1fa-4de9-97bb-4b37d7e1ac0c" containerName="registry-server"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333361 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d741b08c-0e5a-40aa-ba0b-6f11743daa22" containerName="registry-server"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333379 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2921317-af1c-4c00-b999-99897d66aaba" containerName="registry-server"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333389 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9679204c-d5b6-489d-9f27-d84d360284ae" containerName="marketplace-operator"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.333403 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb57b35-984c-43d1-8c3e-d3311bb457f4" containerName="registry-server"
Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.334523 4698 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.338352 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.341001 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9lwt"] Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.401308 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15790639-5955-4eca-91d8-aab72bf25943-catalog-content\") pod \"redhat-marketplace-w9lwt\" (UID: \"15790639-5955-4eca-91d8-aab72bf25943\") " pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.401445 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15790639-5955-4eca-91d8-aab72bf25943-utilities\") pod \"redhat-marketplace-w9lwt\" (UID: \"15790639-5955-4eca-91d8-aab72bf25943\") " pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.401504 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm6vl\" (UniqueName: \"kubernetes.io/projected/15790639-5955-4eca-91d8-aab72bf25943-kube-api-access-mm6vl\") pod \"redhat-marketplace-w9lwt\" (UID: \"15790639-5955-4eca-91d8-aab72bf25943\") " pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.503315 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15790639-5955-4eca-91d8-aab72bf25943-catalog-content\") pod \"redhat-marketplace-w9lwt\" (UID: 
\"15790639-5955-4eca-91d8-aab72bf25943\") " pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.503481 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15790639-5955-4eca-91d8-aab72bf25943-utilities\") pod \"redhat-marketplace-w9lwt\" (UID: \"15790639-5955-4eca-91d8-aab72bf25943\") " pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.503547 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm6vl\" (UniqueName: \"kubernetes.io/projected/15790639-5955-4eca-91d8-aab72bf25943-kube-api-access-mm6vl\") pod \"redhat-marketplace-w9lwt\" (UID: \"15790639-5955-4eca-91d8-aab72bf25943\") " pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.504201 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15790639-5955-4eca-91d8-aab72bf25943-catalog-content\") pod \"redhat-marketplace-w9lwt\" (UID: \"15790639-5955-4eca-91d8-aab72bf25943\") " pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.504533 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15790639-5955-4eca-91d8-aab72bf25943-utilities\") pod \"redhat-marketplace-w9lwt\" (UID: \"15790639-5955-4eca-91d8-aab72bf25943\") " pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.520838 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pw888"] Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.522269 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.525193 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.534375 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm6vl\" (UniqueName: \"kubernetes.io/projected/15790639-5955-4eca-91d8-aab72bf25943-kube-api-access-mm6vl\") pod \"redhat-marketplace-w9lwt\" (UID: \"15790639-5955-4eca-91d8-aab72bf25943\") " pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.542011 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pw888"] Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.605318 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w59pw\" (UniqueName: \"kubernetes.io/projected/3b2a749a-0657-4670-8028-451cde6de012-kube-api-access-w59pw\") pod \"redhat-operators-pw888\" (UID: \"3b2a749a-0657-4670-8028-451cde6de012\") " pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.605395 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b2a749a-0657-4670-8028-451cde6de012-catalog-content\") pod \"redhat-operators-pw888\" (UID: \"3b2a749a-0657-4670-8028-451cde6de012\") " pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.605424 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b2a749a-0657-4670-8028-451cde6de012-utilities\") pod \"redhat-operators-pw888\" (UID: 
\"3b2a749a-0657-4670-8028-451cde6de012\") " pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.652504 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.706999 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w59pw\" (UniqueName: \"kubernetes.io/projected/3b2a749a-0657-4670-8028-451cde6de012-kube-api-access-w59pw\") pod \"redhat-operators-pw888\" (UID: \"3b2a749a-0657-4670-8028-451cde6de012\") " pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.707106 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b2a749a-0657-4670-8028-451cde6de012-catalog-content\") pod \"redhat-operators-pw888\" (UID: \"3b2a749a-0657-4670-8028-451cde6de012\") " pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.707150 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b2a749a-0657-4670-8028-451cde6de012-utilities\") pod \"redhat-operators-pw888\" (UID: \"3b2a749a-0657-4670-8028-451cde6de012\") " pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.707670 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b2a749a-0657-4670-8028-451cde6de012-catalog-content\") pod \"redhat-operators-pw888\" (UID: \"3b2a749a-0657-4670-8028-451cde6de012\") " pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.707762 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/3b2a749a-0657-4670-8028-451cde6de012-utilities\") pod \"redhat-operators-pw888\" (UID: \"3b2a749a-0657-4670-8028-451cde6de012\") " pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.725654 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w59pw\" (UniqueName: \"kubernetes.io/projected/3b2a749a-0657-4670-8028-451cde6de012-kube-api-access-w59pw\") pod \"redhat-operators-pw888\" (UID: \"3b2a749a-0657-4670-8028-451cde6de012\") " pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:12:49 crc kubenswrapper[4698]: I0216 00:12:49.865592 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:12:50 crc kubenswrapper[4698]: I0216 00:12:50.063809 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9lwt"] Feb 16 00:12:50 crc kubenswrapper[4698]: I0216 00:12:50.292814 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pw888"] Feb 16 00:12:50 crc kubenswrapper[4698]: W0216 00:12:50.314448 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b2a749a_0657_4670_8028_451cde6de012.slice/crio-9937daaf655633b97598fb2557c84ef6dfe7648eb236f758c39bb96095be17e3 WatchSource:0}: Error finding container 9937daaf655633b97598fb2557c84ef6dfe7648eb236f758c39bb96095be17e3: Status 404 returned error can't find the container with id 9937daaf655633b97598fb2557c84ef6dfe7648eb236f758c39bb96095be17e3 Feb 16 00:12:50 crc kubenswrapper[4698]: I0216 00:12:50.983744 4698 generic.go:334] "Generic (PLEG): container finished" podID="15790639-5955-4eca-91d8-aab72bf25943" containerID="4c840b8f4eb9b5edecc524236ccf130bc7932643d8f5bd47ba2ebe97f11977b5" exitCode=0 Feb 16 00:12:50 crc kubenswrapper[4698]: I0216 
00:12:50.983924 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9lwt" event={"ID":"15790639-5955-4eca-91d8-aab72bf25943","Type":"ContainerDied","Data":"4c840b8f4eb9b5edecc524236ccf130bc7932643d8f5bd47ba2ebe97f11977b5"} Feb 16 00:12:50 crc kubenswrapper[4698]: I0216 00:12:50.984398 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9lwt" event={"ID":"15790639-5955-4eca-91d8-aab72bf25943","Type":"ContainerStarted","Data":"680087beb873d245a50d8063e6eca098f7b53e4278183200fec590327fc184b6"} Feb 16 00:12:50 crc kubenswrapper[4698]: I0216 00:12:50.987268 4698 generic.go:334] "Generic (PLEG): container finished" podID="3b2a749a-0657-4670-8028-451cde6de012" containerID="a1cbdecc6a67315a980b5ad08f0e3030e1f0441a4ae34e04c4e7ae195cf0e66b" exitCode=0 Feb 16 00:12:50 crc kubenswrapper[4698]: I0216 00:12:50.988240 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw888" event={"ID":"3b2a749a-0657-4670-8028-451cde6de012","Type":"ContainerDied","Data":"a1cbdecc6a67315a980b5ad08f0e3030e1f0441a4ae34e04c4e7ae195cf0e66b"} Feb 16 00:12:50 crc kubenswrapper[4698]: I0216 00:12:50.988356 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw888" event={"ID":"3b2a749a-0657-4670-8028-451cde6de012","Type":"ContainerStarted","Data":"9937daaf655633b97598fb2557c84ef6dfe7648eb236f758c39bb96095be17e3"} Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.241566 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9n7sm"] Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.243165 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.248934 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.255726 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bmzl7"] Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.257751 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.263764 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.295135 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bmzl7"] Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.299982 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9n7sm"] Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.357652 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4f60670-6117-4fd1-b177-bd5b801a669f-catalog-content\") pod \"certified-operators-bmzl7\" (UID: \"e4f60670-6117-4fd1-b177-bd5b801a669f\") " pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.357728 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d6a6915-3d48-4029-a022-26658bb88374-catalog-content\") pod \"community-operators-9n7sm\" (UID: \"2d6a6915-3d48-4029-a022-26658bb88374\") " 
pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.357859 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d6a6915-3d48-4029-a022-26658bb88374-utilities\") pod \"community-operators-9n7sm\" (UID: \"2d6a6915-3d48-4029-a022-26658bb88374\") " pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.358140 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4f60670-6117-4fd1-b177-bd5b801a669f-utilities\") pod \"certified-operators-bmzl7\" (UID: \"e4f60670-6117-4fd1-b177-bd5b801a669f\") " pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.358212 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m7qc\" (UniqueName: \"kubernetes.io/projected/e4f60670-6117-4fd1-b177-bd5b801a669f-kube-api-access-6m7qc\") pod \"certified-operators-bmzl7\" (UID: \"e4f60670-6117-4fd1-b177-bd5b801a669f\") " pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.358280 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6gkp\" (UniqueName: \"kubernetes.io/projected/2d6a6915-3d48-4029-a022-26658bb88374-kube-api-access-m6gkp\") pod \"community-operators-9n7sm\" (UID: \"2d6a6915-3d48-4029-a022-26658bb88374\") " pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.459940 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4f60670-6117-4fd1-b177-bd5b801a669f-catalog-content\") pod 
\"certified-operators-bmzl7\" (UID: \"e4f60670-6117-4fd1-b177-bd5b801a669f\") " pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.460076 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d6a6915-3d48-4029-a022-26658bb88374-catalog-content\") pod \"community-operators-9n7sm\" (UID: \"2d6a6915-3d48-4029-a022-26658bb88374\") " pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.460137 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d6a6915-3d48-4029-a022-26658bb88374-utilities\") pod \"community-operators-9n7sm\" (UID: \"2d6a6915-3d48-4029-a022-26658bb88374\") " pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.460179 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4f60670-6117-4fd1-b177-bd5b801a669f-utilities\") pod \"certified-operators-bmzl7\" (UID: \"e4f60670-6117-4fd1-b177-bd5b801a669f\") " pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.460212 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m7qc\" (UniqueName: \"kubernetes.io/projected/e4f60670-6117-4fd1-b177-bd5b801a669f-kube-api-access-6m7qc\") pod \"certified-operators-bmzl7\" (UID: \"e4f60670-6117-4fd1-b177-bd5b801a669f\") " pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.460249 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6gkp\" (UniqueName: \"kubernetes.io/projected/2d6a6915-3d48-4029-a022-26658bb88374-kube-api-access-m6gkp\") pod 
\"community-operators-9n7sm\" (UID: \"2d6a6915-3d48-4029-a022-26658bb88374\") " pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.460579 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4f60670-6117-4fd1-b177-bd5b801a669f-catalog-content\") pod \"certified-operators-bmzl7\" (UID: \"e4f60670-6117-4fd1-b177-bd5b801a669f\") " pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.460898 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d6a6915-3d48-4029-a022-26658bb88374-catalog-content\") pod \"community-operators-9n7sm\" (UID: \"2d6a6915-3d48-4029-a022-26658bb88374\") " pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.461241 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d6a6915-3d48-4029-a022-26658bb88374-utilities\") pod \"community-operators-9n7sm\" (UID: \"2d6a6915-3d48-4029-a022-26658bb88374\") " pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.461582 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4f60670-6117-4fd1-b177-bd5b801a669f-utilities\") pod \"certified-operators-bmzl7\" (UID: \"e4f60670-6117-4fd1-b177-bd5b801a669f\") " pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.481555 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6gkp\" (UniqueName: \"kubernetes.io/projected/2d6a6915-3d48-4029-a022-26658bb88374-kube-api-access-m6gkp\") pod \"community-operators-9n7sm\" (UID: 
\"2d6a6915-3d48-4029-a022-26658bb88374\") " pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.509038 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m7qc\" (UniqueName: \"kubernetes.io/projected/e4f60670-6117-4fd1-b177-bd5b801a669f-kube-api-access-6m7qc\") pod \"certified-operators-bmzl7\" (UID: \"e4f60670-6117-4fd1-b177-bd5b801a669f\") " pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.561139 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:12:52 crc kubenswrapper[4698]: I0216 00:12:52.583825 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:12:53 crc kubenswrapper[4698]: I0216 00:12:53.024504 4698 generic.go:334] "Generic (PLEG): container finished" podID="15790639-5955-4eca-91d8-aab72bf25943" containerID="796586abf5d78adfdb9ca73a99a03942a0ef3a427734c96d165cbae0b8cf405c" exitCode=0 Feb 16 00:12:53 crc kubenswrapper[4698]: I0216 00:12:53.024975 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9lwt" event={"ID":"15790639-5955-4eca-91d8-aab72bf25943","Type":"ContainerDied","Data":"796586abf5d78adfdb9ca73a99a03942a0ef3a427734c96d165cbae0b8cf405c"} Feb 16 00:12:53 crc kubenswrapper[4698]: I0216 00:12:53.027567 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9n7sm"] Feb 16 00:12:53 crc kubenswrapper[4698]: I0216 00:12:53.041550 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw888" event={"ID":"3b2a749a-0657-4670-8028-451cde6de012","Type":"ContainerStarted","Data":"06f3f89ae6b3bc2236c197776960615cbd27e87c3136cab0f2562444795ad0d6"} Feb 16 00:12:53 crc kubenswrapper[4698]: 
I0216 00:12:53.130898 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bmzl7"] Feb 16 00:12:53 crc kubenswrapper[4698]: W0216 00:12:53.140950 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4f60670_6117_4fd1_b177_bd5b801a669f.slice/crio-81991304b43fe013f798f355c7533f0b62f839ea84f3f2fc9557dddad770c41b WatchSource:0}: Error finding container 81991304b43fe013f798f355c7533f0b62f839ea84f3f2fc9557dddad770c41b: Status 404 returned error can't find the container with id 81991304b43fe013f798f355c7533f0b62f839ea84f3f2fc9557dddad770c41b Feb 16 00:12:54 crc kubenswrapper[4698]: I0216 00:12:54.049393 4698 generic.go:334] "Generic (PLEG): container finished" podID="e4f60670-6117-4fd1-b177-bd5b801a669f" containerID="10dd5956c02930405142a06061a16288aecac9ad7755f75281001385d549dabf" exitCode=0 Feb 16 00:12:54 crc kubenswrapper[4698]: I0216 00:12:54.050102 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmzl7" event={"ID":"e4f60670-6117-4fd1-b177-bd5b801a669f","Type":"ContainerDied","Data":"10dd5956c02930405142a06061a16288aecac9ad7755f75281001385d549dabf"} Feb 16 00:12:54 crc kubenswrapper[4698]: I0216 00:12:54.050341 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmzl7" event={"ID":"e4f60670-6117-4fd1-b177-bd5b801a669f","Type":"ContainerStarted","Data":"81991304b43fe013f798f355c7533f0b62f839ea84f3f2fc9557dddad770c41b"} Feb 16 00:12:54 crc kubenswrapper[4698]: I0216 00:12:54.053507 4698 generic.go:334] "Generic (PLEG): container finished" podID="3b2a749a-0657-4670-8028-451cde6de012" containerID="06f3f89ae6b3bc2236c197776960615cbd27e87c3136cab0f2562444795ad0d6" exitCode=0 Feb 16 00:12:54 crc kubenswrapper[4698]: I0216 00:12:54.053566 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw888" 
event={"ID":"3b2a749a-0657-4670-8028-451cde6de012","Type":"ContainerDied","Data":"06f3f89ae6b3bc2236c197776960615cbd27e87c3136cab0f2562444795ad0d6"} Feb 16 00:12:54 crc kubenswrapper[4698]: I0216 00:12:54.060416 4698 generic.go:334] "Generic (PLEG): container finished" podID="2d6a6915-3d48-4029-a022-26658bb88374" containerID="a80e0b9532e464a8cd65d7a468b8b91f63efe89224d0d225cb98f6546f1d504a" exitCode=0 Feb 16 00:12:54 crc kubenswrapper[4698]: I0216 00:12:54.060493 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9n7sm" event={"ID":"2d6a6915-3d48-4029-a022-26658bb88374","Type":"ContainerDied","Data":"a80e0b9532e464a8cd65d7a468b8b91f63efe89224d0d225cb98f6546f1d504a"} Feb 16 00:12:54 crc kubenswrapper[4698]: I0216 00:12:54.060528 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9n7sm" event={"ID":"2d6a6915-3d48-4029-a022-26658bb88374","Type":"ContainerStarted","Data":"4bce928a6904a3311095532683a862317056522d0d70b4f992d542e0394a2a4d"} Feb 16 00:12:54 crc kubenswrapper[4698]: I0216 00:12:54.065296 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9lwt" event={"ID":"15790639-5955-4eca-91d8-aab72bf25943","Type":"ContainerStarted","Data":"9b6cdc59739da10d4128cf1d92336194eb9682b30b18ce49011967bece1d58aa"} Feb 16 00:12:54 crc kubenswrapper[4698]: I0216 00:12:54.125705 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w9lwt" podStartSLOduration=2.626245129 podStartE2EDuration="5.125590637s" podCreationTimestamp="2026-02-16 00:12:49 +0000 UTC" firstStartedPulling="2026-02-16 00:12:50.986119103 +0000 UTC m=+380.644017865" lastFinishedPulling="2026-02-16 00:12:53.485464581 +0000 UTC m=+383.143363373" observedRunningTime="2026-02-16 00:12:54.122395573 +0000 UTC m=+383.780294345" watchObservedRunningTime="2026-02-16 00:12:54.125590637 +0000 UTC 
m=+383.783489409" Feb 16 00:12:56 crc kubenswrapper[4698]: I0216 00:12:56.078481 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw888" event={"ID":"3b2a749a-0657-4670-8028-451cde6de012","Type":"ContainerStarted","Data":"a554beed405167be683af8579dbb86e06fba86b71fe461022748590666526998"} Feb 16 00:12:56 crc kubenswrapper[4698]: I0216 00:12:56.081370 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9n7sm" event={"ID":"2d6a6915-3d48-4029-a022-26658bb88374","Type":"ContainerStarted","Data":"25ef049952eb907403622f77e9153a5a3b9d3b7c8f118509243153721fefbcb7"} Feb 16 00:12:56 crc kubenswrapper[4698]: I0216 00:12:56.084782 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmzl7" event={"ID":"e4f60670-6117-4fd1-b177-bd5b801a669f","Type":"ContainerStarted","Data":"28fe2690f2d056c0f9266c94335cab089975a198fe1425dea7ba177c3fdb722b"} Feb 16 00:12:56 crc kubenswrapper[4698]: I0216 00:12:56.100270 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pw888" podStartSLOduration=2.644524654 podStartE2EDuration="7.100248923s" podCreationTimestamp="2026-02-16 00:12:49 +0000 UTC" firstStartedPulling="2026-02-16 00:12:50.9970975 +0000 UTC m=+380.654996262" lastFinishedPulling="2026-02-16 00:12:55.452821769 +0000 UTC m=+385.110720531" observedRunningTime="2026-02-16 00:12:56.098981272 +0000 UTC m=+385.756880054" watchObservedRunningTime="2026-02-16 00:12:56.100248923 +0000 UTC m=+385.758147685" Feb 16 00:12:57 crc kubenswrapper[4698]: I0216 00:12:57.046154 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:12:57 crc 
kubenswrapper[4698]: I0216 00:12:57.046954 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:12:57 crc kubenswrapper[4698]: I0216 00:12:57.093695 4698 generic.go:334] "Generic (PLEG): container finished" podID="e4f60670-6117-4fd1-b177-bd5b801a669f" containerID="28fe2690f2d056c0f9266c94335cab089975a198fe1425dea7ba177c3fdb722b" exitCode=0 Feb 16 00:12:57 crc kubenswrapper[4698]: I0216 00:12:57.093766 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmzl7" event={"ID":"e4f60670-6117-4fd1-b177-bd5b801a669f","Type":"ContainerDied","Data":"28fe2690f2d056c0f9266c94335cab089975a198fe1425dea7ba177c3fdb722b"} Feb 16 00:12:57 crc kubenswrapper[4698]: I0216 00:12:57.096643 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9n7sm" event={"ID":"2d6a6915-3d48-4029-a022-26658bb88374","Type":"ContainerDied","Data":"25ef049952eb907403622f77e9153a5a3b9d3b7c8f118509243153721fefbcb7"} Feb 16 00:12:57 crc kubenswrapper[4698]: I0216 00:12:57.096697 4698 generic.go:334] "Generic (PLEG): container finished" podID="2d6a6915-3d48-4029-a022-26658bb88374" containerID="25ef049952eb907403622f77e9153a5a3b9d3b7c8f118509243153721fefbcb7" exitCode=0 Feb 16 00:12:59 crc kubenswrapper[4698]: I0216 00:12:59.110368 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9n7sm" event={"ID":"2d6a6915-3d48-4029-a022-26658bb88374","Type":"ContainerStarted","Data":"7c915df7221f70389acc72f27b275abbd9e9d1583777fd1e8acae038f4f3842c"} Feb 16 00:12:59 crc kubenswrapper[4698]: I0216 00:12:59.115217 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-bmzl7" event={"ID":"e4f60670-6117-4fd1-b177-bd5b801a669f","Type":"ContainerStarted","Data":"f8fa6b29c46355d547415f3ab90737d442f30999869534a82b620579ee64784e"} Feb 16 00:12:59 crc kubenswrapper[4698]: I0216 00:12:59.136594 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9n7sm" podStartSLOduration=3.620121612 podStartE2EDuration="7.136572301s" podCreationTimestamp="2026-02-16 00:12:52 +0000 UTC" firstStartedPulling="2026-02-16 00:12:54.062570346 +0000 UTC m=+383.720469108" lastFinishedPulling="2026-02-16 00:12:57.579021015 +0000 UTC m=+387.236919797" observedRunningTime="2026-02-16 00:12:59.136348414 +0000 UTC m=+388.794247176" watchObservedRunningTime="2026-02-16 00:12:59.136572301 +0000 UTC m=+388.794471063" Feb 16 00:12:59 crc kubenswrapper[4698]: I0216 00:12:59.162584 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bmzl7" podStartSLOduration=3.663445813 podStartE2EDuration="7.162560927s" podCreationTimestamp="2026-02-16 00:12:52 +0000 UTC" firstStartedPulling="2026-02-16 00:12:54.052026064 +0000 UTC m=+383.709924826" lastFinishedPulling="2026-02-16 00:12:57.551141178 +0000 UTC m=+387.209039940" observedRunningTime="2026-02-16 00:12:59.159484817 +0000 UTC m=+388.817383589" watchObservedRunningTime="2026-02-16 00:12:59.162560927 +0000 UTC m=+388.820459689" Feb 16 00:12:59 crc kubenswrapper[4698]: I0216 00:12:59.653200 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:12:59 crc kubenswrapper[4698]: I0216 00:12:59.653564 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:12:59 crc kubenswrapper[4698]: I0216 00:12:59.702972 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:12:59 crc kubenswrapper[4698]: I0216 00:12:59.866433 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:12:59 crc kubenswrapper[4698]: I0216 00:12:59.866563 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:13:00 crc kubenswrapper[4698]: I0216 00:13:00.170574 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:13:00 crc kubenswrapper[4698]: I0216 00:13:00.910285 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pw888" podUID="3b2a749a-0657-4670-8028-451cde6de012" containerName="registry-server" probeResult="failure" output=< Feb 16 00:13:00 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Feb 16 00:13:00 crc kubenswrapper[4698]: > Feb 16 00:13:02 crc kubenswrapper[4698]: I0216 00:13:02.562510 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:13:02 crc kubenswrapper[4698]: I0216 00:13:02.562935 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:13:02 crc kubenswrapper[4698]: I0216 00:13:02.584536 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:13:02 crc kubenswrapper[4698]: I0216 00:13:02.584589 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:13:02 crc kubenswrapper[4698]: I0216 00:13:02.608786 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:13:02 
crc kubenswrapper[4698]: I0216 00:13:02.627463 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:13:03 crc kubenswrapper[4698]: I0216 00:13:03.186720 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bmzl7" Feb 16 00:13:03 crc kubenswrapper[4698]: I0216 00:13:03.187442 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9n7sm" Feb 16 00:13:06 crc kubenswrapper[4698]: I0216 00:13:06.593748 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-whzh9" Feb 16 00:13:06 crc kubenswrapper[4698]: I0216 00:13:06.657321 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hbrff"] Feb 16 00:13:09 crc kubenswrapper[4698]: I0216 00:13:09.926368 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:13:09 crc kubenswrapper[4698]: I0216 00:13:09.983396 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pw888" Feb 16 00:13:27 crc kubenswrapper[4698]: I0216 00:13:27.045962 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:13:27 crc kubenswrapper[4698]: I0216 00:13:27.046763 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:13:27 crc kubenswrapper[4698]: I0216 00:13:27.046846 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:13:27 crc kubenswrapper[4698]: I0216 00:13:27.047795 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c48928118e373a7f22de0377bb5928d81fc331d1ecea88c18cef22f90c1e4a6"} pod="openshift-machine-config-operator/machine-config-daemon-z56m2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 00:13:27 crc kubenswrapper[4698]: I0216 00:13:27.047902 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" containerID="cri-o://4c48928118e373a7f22de0377bb5928d81fc331d1ecea88c18cef22f90c1e4a6" gracePeriod=600 Feb 16 00:13:27 crc kubenswrapper[4698]: I0216 00:13:27.305240 4698 generic.go:334] "Generic (PLEG): container finished" podID="7b351654-277f-4d0d-84f9-b003f934936c" containerID="4c48928118e373a7f22de0377bb5928d81fc331d1ecea88c18cef22f90c1e4a6" exitCode=0 Feb 16 00:13:27 crc kubenswrapper[4698]: I0216 00:13:27.305293 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerDied","Data":"4c48928118e373a7f22de0377bb5928d81fc331d1ecea88c18cef22f90c1e4a6"} Feb 16 00:13:27 crc kubenswrapper[4698]: I0216 00:13:27.305336 4698 scope.go:117] "RemoveContainer" containerID="ba31e2cd95e5d2d169b09d4b7dc0cd0181c1bcd559a2be3fad52849affff85e8" Feb 16 00:13:28 crc kubenswrapper[4698]: I0216 00:13:28.314724 4698 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerStarted","Data":"95b91d2cb7e56ab2acf12e0ef16910725a29cc735baa309c370a87fec7d9c648"} Feb 16 00:13:31 crc kubenswrapper[4698]: I0216 00:13:31.701063 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" podUID="9e628ec8-31d7-43de-9c56-58f049dd8935" containerName="registry" containerID="cri-o://90985e89ee3f6346256ff92c05e1e220f34ee37aab15886e16bd0a05fb9571fa" gracePeriod=30 Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.211653 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.346759 4698 generic.go:334] "Generic (PLEG): container finished" podID="9e628ec8-31d7-43de-9c56-58f049dd8935" containerID="90985e89ee3f6346256ff92c05e1e220f34ee37aab15886e16bd0a05fb9571fa" exitCode=0 Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.346835 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" event={"ID":"9e628ec8-31d7-43de-9c56-58f049dd8935","Type":"ContainerDied","Data":"90985e89ee3f6346256ff92c05e1e220f34ee37aab15886e16bd0a05fb9571fa"} Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.346880 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" event={"ID":"9e628ec8-31d7-43de-9c56-58f049dd8935","Type":"ContainerDied","Data":"1882cc046b514fdf1d66a6d8c46f5d5ee098f06de2de6aa7ac73ffb7754107b3"} Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.346918 4698 scope.go:117] "RemoveContainer" containerID="90985e89ee3f6346256ff92c05e1e220f34ee37aab15886e16bd0a05fb9571fa" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.347160 4698 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hbrff" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.375058 4698 scope.go:117] "RemoveContainer" containerID="90985e89ee3f6346256ff92c05e1e220f34ee37aab15886e16bd0a05fb9571fa" Feb 16 00:13:32 crc kubenswrapper[4698]: E0216 00:13:32.375861 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90985e89ee3f6346256ff92c05e1e220f34ee37aab15886e16bd0a05fb9571fa\": container with ID starting with 90985e89ee3f6346256ff92c05e1e220f34ee37aab15886e16bd0a05fb9571fa not found: ID does not exist" containerID="90985e89ee3f6346256ff92c05e1e220f34ee37aab15886e16bd0a05fb9571fa" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.375934 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90985e89ee3f6346256ff92c05e1e220f34ee37aab15886e16bd0a05fb9571fa"} err="failed to get container status \"90985e89ee3f6346256ff92c05e1e220f34ee37aab15886e16bd0a05fb9571fa\": rpc error: code = NotFound desc = could not find container \"90985e89ee3f6346256ff92c05e1e220f34ee37aab15886e16bd0a05fb9571fa\": container with ID starting with 90985e89ee3f6346256ff92c05e1e220f34ee37aab15886e16bd0a05fb9571fa not found: ID does not exist" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.390890 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-bound-sa-token\") pod \"9e628ec8-31d7-43de-9c56-58f049dd8935\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.391077 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e628ec8-31d7-43de-9c56-58f049dd8935-ca-trust-extracted\") pod 
\"9e628ec8-31d7-43de-9c56-58f049dd8935\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.391495 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"9e628ec8-31d7-43de-9c56-58f049dd8935\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.391659 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e628ec8-31d7-43de-9c56-58f049dd8935-trusted-ca\") pod \"9e628ec8-31d7-43de-9c56-58f049dd8935\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.391726 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnncf\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-kube-api-access-qnncf\") pod \"9e628ec8-31d7-43de-9c56-58f049dd8935\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.391774 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e628ec8-31d7-43de-9c56-58f049dd8935-registry-certificates\") pod \"9e628ec8-31d7-43de-9c56-58f049dd8935\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.391837 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e628ec8-31d7-43de-9c56-58f049dd8935-installation-pull-secrets\") pod \"9e628ec8-31d7-43de-9c56-58f049dd8935\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.391888 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-registry-tls\") pod \"9e628ec8-31d7-43de-9c56-58f049dd8935\" (UID: \"9e628ec8-31d7-43de-9c56-58f049dd8935\") " Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.392790 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e628ec8-31d7-43de-9c56-58f049dd8935-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e628ec8-31d7-43de-9c56-58f049dd8935" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.393111 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e628ec8-31d7-43de-9c56-58f049dd8935-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e628ec8-31d7-43de-9c56-58f049dd8935" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.401303 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e628ec8-31d7-43de-9c56-58f049dd8935" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.401357 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e628ec8-31d7-43de-9c56-58f049dd8935-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e628ec8-31d7-43de-9c56-58f049dd8935" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935"). 
InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.401685 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-kube-api-access-qnncf" (OuterVolumeSpecName: "kube-api-access-qnncf") pod "9e628ec8-31d7-43de-9c56-58f049dd8935" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935"). InnerVolumeSpecName "kube-api-access-qnncf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.401892 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e628ec8-31d7-43de-9c56-58f049dd8935" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.407967 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "9e628ec8-31d7-43de-9c56-58f049dd8935" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.415471 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e628ec8-31d7-43de-9c56-58f049dd8935-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e628ec8-31d7-43de-9c56-58f049dd8935" (UID: "9e628ec8-31d7-43de-9c56-58f049dd8935"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.493018 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.493056 4698 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e628ec8-31d7-43de-9c56-58f049dd8935-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.493069 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e628ec8-31d7-43de-9c56-58f049dd8935-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.493082 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnncf\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-kube-api-access-qnncf\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.493098 4698 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e628ec8-31d7-43de-9c56-58f049dd8935-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.493111 4698 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e628ec8-31d7-43de-9c56-58f049dd8935-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.493122 4698 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e628ec8-31d7-43de-9c56-58f049dd8935-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:32 crc 
kubenswrapper[4698]: I0216 00:13:32.698818 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hbrff"] Feb 16 00:13:32 crc kubenswrapper[4698]: I0216 00:13:32.704447 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hbrff"] Feb 16 00:13:33 crc kubenswrapper[4698]: I0216 00:13:33.242769 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e628ec8-31d7-43de-9c56-58f049dd8935" path="/var/lib/kubelet/pods/9e628ec8-31d7-43de-9c56-58f049dd8935/volumes" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.184607 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl"] Feb 16 00:15:00 crc kubenswrapper[4698]: E0216 00:15:00.186135 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e628ec8-31d7-43de-9c56-58f049dd8935" containerName="registry" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.186156 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e628ec8-31d7-43de-9c56-58f049dd8935" containerName="registry" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.186279 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e628ec8-31d7-43de-9c56-58f049dd8935" containerName="registry" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.187067 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.193350 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.193565 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl"] Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.193788 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.209925 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e78d7277-3213-4863-93b2-7fd6dbef2509-config-volume\") pod \"collect-profiles-29520015-h5fcl\" (UID: \"e78d7277-3213-4863-93b2-7fd6dbef2509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.210060 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56p8t\" (UniqueName: \"kubernetes.io/projected/e78d7277-3213-4863-93b2-7fd6dbef2509-kube-api-access-56p8t\") pod \"collect-profiles-29520015-h5fcl\" (UID: \"e78d7277-3213-4863-93b2-7fd6dbef2509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.210097 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e78d7277-3213-4863-93b2-7fd6dbef2509-secret-volume\") pod \"collect-profiles-29520015-h5fcl\" (UID: \"e78d7277-3213-4863-93b2-7fd6dbef2509\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.312331 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e78d7277-3213-4863-93b2-7fd6dbef2509-config-volume\") pod \"collect-profiles-29520015-h5fcl\" (UID: \"e78d7277-3213-4863-93b2-7fd6dbef2509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.313953 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e78d7277-3213-4863-93b2-7fd6dbef2509-config-volume\") pod \"collect-profiles-29520015-h5fcl\" (UID: \"e78d7277-3213-4863-93b2-7fd6dbef2509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.314175 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56p8t\" (UniqueName: \"kubernetes.io/projected/e78d7277-3213-4863-93b2-7fd6dbef2509-kube-api-access-56p8t\") pod \"collect-profiles-29520015-h5fcl\" (UID: \"e78d7277-3213-4863-93b2-7fd6dbef2509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.314767 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e78d7277-3213-4863-93b2-7fd6dbef2509-secret-volume\") pod \"collect-profiles-29520015-h5fcl\" (UID: \"e78d7277-3213-4863-93b2-7fd6dbef2509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.322869 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e78d7277-3213-4863-93b2-7fd6dbef2509-secret-volume\") pod \"collect-profiles-29520015-h5fcl\" (UID: \"e78d7277-3213-4863-93b2-7fd6dbef2509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.345803 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56p8t\" (UniqueName: \"kubernetes.io/projected/e78d7277-3213-4863-93b2-7fd6dbef2509-kube-api-access-56p8t\") pod \"collect-profiles-29520015-h5fcl\" (UID: \"e78d7277-3213-4863-93b2-7fd6dbef2509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.517913 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.737026 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl"] Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.989050 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" event={"ID":"e78d7277-3213-4863-93b2-7fd6dbef2509","Type":"ContainerStarted","Data":"867319dafb09451151dc24d7bef0bbd9c4195e0060e7b2a2af8b71330eedaf36"} Feb 16 00:15:00 crc kubenswrapper[4698]: I0216 00:15:00.989560 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" event={"ID":"e78d7277-3213-4863-93b2-7fd6dbef2509","Type":"ContainerStarted","Data":"8af97c2cc23fc703d8561da3c165a708211a17418210d9e78dd7b80227b29cfb"} Feb 16 00:15:01 crc kubenswrapper[4698]: I0216 00:15:01.008197 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" 
podStartSLOduration=1.008163773 podStartE2EDuration="1.008163773s" podCreationTimestamp="2026-02-16 00:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:15:01.004572691 +0000 UTC m=+510.662471453" watchObservedRunningTime="2026-02-16 00:15:01.008163773 +0000 UTC m=+510.666062545" Feb 16 00:15:01 crc kubenswrapper[4698]: I0216 00:15:01.998778 4698 generic.go:334] "Generic (PLEG): container finished" podID="e78d7277-3213-4863-93b2-7fd6dbef2509" containerID="867319dafb09451151dc24d7bef0bbd9c4195e0060e7b2a2af8b71330eedaf36" exitCode=0 Feb 16 00:15:01 crc kubenswrapper[4698]: I0216 00:15:01.998883 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" event={"ID":"e78d7277-3213-4863-93b2-7fd6dbef2509","Type":"ContainerDied","Data":"867319dafb09451151dc24d7bef0bbd9c4195e0060e7b2a2af8b71330eedaf36"} Feb 16 00:15:03 crc kubenswrapper[4698]: I0216 00:15:03.302427 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" Feb 16 00:15:03 crc kubenswrapper[4698]: I0216 00:15:03.461180 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e78d7277-3213-4863-93b2-7fd6dbef2509-config-volume\") pod \"e78d7277-3213-4863-93b2-7fd6dbef2509\" (UID: \"e78d7277-3213-4863-93b2-7fd6dbef2509\") " Feb 16 00:15:03 crc kubenswrapper[4698]: I0216 00:15:03.461369 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e78d7277-3213-4863-93b2-7fd6dbef2509-secret-volume\") pod \"e78d7277-3213-4863-93b2-7fd6dbef2509\" (UID: \"e78d7277-3213-4863-93b2-7fd6dbef2509\") " Feb 16 00:15:03 crc kubenswrapper[4698]: I0216 00:15:03.462693 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78d7277-3213-4863-93b2-7fd6dbef2509-config-volume" (OuterVolumeSpecName: "config-volume") pod "e78d7277-3213-4863-93b2-7fd6dbef2509" (UID: "e78d7277-3213-4863-93b2-7fd6dbef2509"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:15:03 crc kubenswrapper[4698]: I0216 00:15:03.462826 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56p8t\" (UniqueName: \"kubernetes.io/projected/e78d7277-3213-4863-93b2-7fd6dbef2509-kube-api-access-56p8t\") pod \"e78d7277-3213-4863-93b2-7fd6dbef2509\" (UID: \"e78d7277-3213-4863-93b2-7fd6dbef2509\") "
Feb 16 00:15:03 crc kubenswrapper[4698]: I0216 00:15:03.463057 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e78d7277-3213-4863-93b2-7fd6dbef2509-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 00:15:03 crc kubenswrapper[4698]: I0216 00:15:03.469810 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e78d7277-3213-4863-93b2-7fd6dbef2509-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e78d7277-3213-4863-93b2-7fd6dbef2509" (UID: "e78d7277-3213-4863-93b2-7fd6dbef2509"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:15:03 crc kubenswrapper[4698]: I0216 00:15:03.469815 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78d7277-3213-4863-93b2-7fd6dbef2509-kube-api-access-56p8t" (OuterVolumeSpecName: "kube-api-access-56p8t") pod "e78d7277-3213-4863-93b2-7fd6dbef2509" (UID: "e78d7277-3213-4863-93b2-7fd6dbef2509"). InnerVolumeSpecName "kube-api-access-56p8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:15:03 crc kubenswrapper[4698]: I0216 00:15:03.564993 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e78d7277-3213-4863-93b2-7fd6dbef2509-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 00:15:03 crc kubenswrapper[4698]: I0216 00:15:03.565055 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56p8t\" (UniqueName: \"kubernetes.io/projected/e78d7277-3213-4863-93b2-7fd6dbef2509-kube-api-access-56p8t\") on node \"crc\" DevicePath \"\""
Feb 16 00:15:04 crc kubenswrapper[4698]: I0216 00:15:04.018683 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl" event={"ID":"e78d7277-3213-4863-93b2-7fd6dbef2509","Type":"ContainerDied","Data":"8af97c2cc23fc703d8561da3c165a708211a17418210d9e78dd7b80227b29cfb"}
Feb 16 00:15:04 crc kubenswrapper[4698]: I0216 00:15:04.019057 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8af97c2cc23fc703d8561da3c165a708211a17418210d9e78dd7b80227b29cfb"
Feb 16 00:15:04 crc kubenswrapper[4698]: I0216 00:15:04.018747 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-h5fcl"
Feb 16 00:15:27 crc kubenswrapper[4698]: I0216 00:15:27.045480 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 00:15:27 crc kubenswrapper[4698]: I0216 00:15:27.046323 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 00:15:57 crc kubenswrapper[4698]: I0216 00:15:57.046175 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 00:15:57 crc kubenswrapper[4698]: I0216 00:15:57.047238 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 00:16:27 crc kubenswrapper[4698]: I0216 00:16:27.046420 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 00:16:27 crc kubenswrapper[4698]: I0216 00:16:27.047543 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 00:16:27 crc kubenswrapper[4698]: I0216 00:16:27.047717 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z56m2"
Feb 16 00:16:27 crc kubenswrapper[4698]: I0216 00:16:27.048951 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95b91d2cb7e56ab2acf12e0ef16910725a29cc735baa309c370a87fec7d9c648"} pod="openshift-machine-config-operator/machine-config-daemon-z56m2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 00:16:27 crc kubenswrapper[4698]: I0216 00:16:27.049032 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" containerID="cri-o://95b91d2cb7e56ab2acf12e0ef16910725a29cc735baa309c370a87fec7d9c648" gracePeriod=600
Feb 16 00:16:27 crc kubenswrapper[4698]: I0216 00:16:27.610590 4698 generic.go:334] "Generic (PLEG): container finished" podID="7b351654-277f-4d0d-84f9-b003f934936c" containerID="95b91d2cb7e56ab2acf12e0ef16910725a29cc735baa309c370a87fec7d9c648" exitCode=0
Feb 16 00:16:27 crc kubenswrapper[4698]: I0216 00:16:27.610641 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerDied","Data":"95b91d2cb7e56ab2acf12e0ef16910725a29cc735baa309c370a87fec7d9c648"}
Feb 16 00:16:27 crc kubenswrapper[4698]: I0216 00:16:27.611108 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerStarted","Data":"28455df6b45ac3d964cdd4d7f6adb7fb0a6e0a48a0dcb629da0d78838dbdbdad"}
Feb 16 00:16:27 crc kubenswrapper[4698]: I0216 00:16:27.611133 4698 scope.go:117] "RemoveContainer" containerID="4c48928118e373a7f22de0377bb5928d81fc331d1ecea88c18cef22f90c1e4a6"
Feb 16 00:16:31 crc kubenswrapper[4698]: I0216 00:16:31.541056 4698 scope.go:117] "RemoveContainer" containerID="cb02c744cadf5aca9a2d93dc156a2001de075d82f5b9609557610d270c021b3c"
Feb 16 00:17:34 crc kubenswrapper[4698]: I0216 00:17:34.926887 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rmrt5"]
Feb 16 00:17:34 crc kubenswrapper[4698]: I0216 00:17:34.928175 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovn-controller" containerID="cri-o://201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b" gracePeriod=30
Feb 16 00:17:34 crc kubenswrapper[4698]: I0216 00:17:34.928287 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="nbdb" containerID="cri-o://88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b" gracePeriod=30
Feb 16 00:17:34 crc kubenswrapper[4698]: I0216 00:17:34.928333 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="kube-rbac-proxy-node" containerID="cri-o://dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50" gracePeriod=30
Feb 16 00:17:34 crc kubenswrapper[4698]: I0216 00:17:34.928384 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovn-acl-logging" containerID="cri-o://6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef" gracePeriod=30
Feb 16 00:17:34 crc kubenswrapper[4698]: I0216 00:17:34.928330 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9" gracePeriod=30
Feb 16 00:17:34 crc kubenswrapper[4698]: I0216 00:17:34.928558 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="northd" containerID="cri-o://849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703" gracePeriod=30
Feb 16 00:17:34 crc kubenswrapper[4698]: I0216 00:17:34.928724 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="sbdb" containerID="cri-o://3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb" gracePeriod=30
Feb 16 00:17:34 crc kubenswrapper[4698]: I0216 00:17:34.977097 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller" containerID="cri-o://db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e" gracePeriod=30
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.088559 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/3.log"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.091860 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovn-acl-logging/0.log"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.092382 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovn-controller/0.log"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.093239 4698 generic.go:334] "Generic (PLEG): container finished" podID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerID="234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9" exitCode=0
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.093270 4698 generic.go:334] "Generic (PLEG): container finished" podID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerID="dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50" exitCode=0
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.093278 4698 generic.go:334] "Generic (PLEG): container finished" podID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerID="6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef" exitCode=143
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.093287 4698 generic.go:334] "Generic (PLEG): container finished" podID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerID="201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b" exitCode=143
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.093332 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerDied","Data":"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9"}
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.093373 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerDied","Data":"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50"}
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.093391 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerDied","Data":"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef"}
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.093405 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerDied","Data":"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b"}
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.105439 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dv2d_69838a3a-c20d-4770-b95f-ab85a265d53c/kube-multus/2.log"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.105905 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dv2d_69838a3a-c20d-4770-b95f-ab85a265d53c/kube-multus/1.log"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.105950 4698 generic.go:334] "Generic (PLEG): container finished" podID="69838a3a-c20d-4770-b95f-ab85a265d53c" containerID="89b1308232f81e46ec49509566a9454686396ff65a1b76bf4537910414500054" exitCode=2
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.105986 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dv2d" event={"ID":"69838a3a-c20d-4770-b95f-ab85a265d53c","Type":"ContainerDied","Data":"89b1308232f81e46ec49509566a9454686396ff65a1b76bf4537910414500054"}
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.106030 4698 scope.go:117] "RemoveContainer" containerID="0e92aaa262728b5ab9af6556d29d5558d08822fe1b06333269be4b1ed3a7abc2"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.106478 4698 scope.go:117] "RemoveContainer" containerID="89b1308232f81e46ec49509566a9454686396ff65a1b76bf4537910414500054"
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.107028 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2dv2d_openshift-multus(69838a3a-c20d-4770-b95f-ab85a265d53c)\"" pod="openshift-multus/multus-2dv2d" podUID="69838a3a-c20d-4770-b95f-ab85a265d53c"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.240755 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/3.log"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.243263 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovn-acl-logging/0.log"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.243793 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovn-controller/0.log"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.244303 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325307 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g2kh7"]
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.325634 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="kube-rbac-proxy-node"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325648 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="kube-rbac-proxy-node"
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.325665 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="sbdb"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325674 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="sbdb"
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.325684 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325690 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.325699 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325705 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.325713 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325718 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.325730 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovn-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325736 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovn-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.325746 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovn-acl-logging"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325752 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovn-acl-logging"
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.325758 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="northd"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325765 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="northd"
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.325772 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="nbdb"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325778 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="nbdb"
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.325787 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="kubecfg-setup"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325792 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="kubecfg-setup"
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.325800 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="kube-rbac-proxy-ovn-metrics"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325806 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="kube-rbac-proxy-ovn-metrics"
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.325814 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78d7277-3213-4863-93b2-7fd6dbef2509" containerName="collect-profiles"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325819 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78d7277-3213-4863-93b2-7fd6dbef2509" containerName="collect-profiles"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325916 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="nbdb"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325927 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="kube-rbac-proxy-ovn-metrics"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325936 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325945 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e78d7277-3213-4863-93b2-7fd6dbef2509" containerName="collect-profiles"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325954 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="northd"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325961 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325966 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325974 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovn-acl-logging"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325982 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325988 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovn-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.325997 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="sbdb"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.326004 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="kube-rbac-proxy-node"
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.326105 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.326112 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: E0216 00:17:35.326119 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.326124 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.326222 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerName="ovnkube-controller"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.328164 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7"
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.407913 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovn-node-metrics-cert\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.408023 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovnkube-script-lib\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.408078 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-cni-bin\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.408141 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovnkube-config\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.408198 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-kubelet\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.409300 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbd26\" (UniqueName: \"kubernetes.io/projected/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-kube-api-access-kbd26\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.409393 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-systemd\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.409450 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-openvswitch\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.409518 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-node-log\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.408332 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.408350 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.409669 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.409127 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.409155 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.409738 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-node-log" (OuterVolumeSpecName: "node-log") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.409843 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.410315 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-systemd-units\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.410384 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-log-socket\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.410434 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-log-socket" (OuterVolumeSpecName: "log-socket") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.410485 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-slash\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.410527 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-slash" (OuterVolumeSpecName: "host-slash") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.410550 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-etc-openvswitch\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.410594 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.410630 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.410757 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.410895 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-cni-netd\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.410933 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-run-ovn-kubernetes\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.410983 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-env-overrides\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.411002 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.411020 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-run-netns\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.411051 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-ovn\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.411086 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-var-lib-openvswitch\") pod \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\" (UID: \"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0\") "
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.411113 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.411047 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.411144 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.411262 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.411740 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "env-overrides".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.411925 4698 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.412441 4698 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.412509 4698 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.412561 4698 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.412583 4698 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.412638 4698 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.412661 4698 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 
crc kubenswrapper[4698]: I0216 00:17:35.412681 4698 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.412698 4698 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.412715 4698 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.412731 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.412748 4698 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.412764 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.412781 4698 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.412799 4698 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.412818 4698 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-node-log\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.415969 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.417505 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-kube-api-access-kbd26" (OuterVolumeSpecName: "kube-api-access-kbd26") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "kube-api-access-kbd26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.433903 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" (UID: "cea3368d-30b3-4bf5-8c91-a6b9c254eaf0"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.514634 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-kubelet\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.514700 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-run-ovn-kubernetes\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.514738 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fa6db6a-990c-40b7-bcb8-43e715b34830-ovnkube-script-lib\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.514790 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-etc-openvswitch\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.514828 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-var-lib-openvswitch\") pod 
\"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.514927 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-run-openvswitch\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.515057 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-cni-bin\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.515167 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt2bm\" (UniqueName: \"kubernetes.io/projected/2fa6db6a-990c-40b7-bcb8-43e715b34830-kube-api-access-dt2bm\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.515254 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-slash\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.515530 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-run-netns\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.515656 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-log-socket\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.515757 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fa6db6a-990c-40b7-bcb8-43e715b34830-env-overrides\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.515823 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fa6db6a-990c-40b7-bcb8-43e715b34830-ovn-node-metrics-cert\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.515870 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-run-systemd\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.515916 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-systemd-units\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.516002 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-node-log\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.516075 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-cni-netd\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.516137 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.516252 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fa6db6a-990c-40b7-bcb8-43e715b34830-ovnkube-config\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.516339 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-run-ovn\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.516450 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbd26\" (UniqueName: \"kubernetes.io/projected/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-kube-api-access-kbd26\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.516483 4698 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.516507 4698 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.516528 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.618554 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fa6db6a-990c-40b7-bcb8-43e715b34830-ovnkube-config\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.618682 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-run-ovn\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.618762 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-kubelet\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.618787 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-run-ovn\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.619053 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-kubelet\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.619331 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-run-ovn-kubernetes\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.619390 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fa6db6a-990c-40b7-bcb8-43e715b34830-ovnkube-script-lib\") pod 
\"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.619507 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-run-ovn-kubernetes\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.619908 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-etc-openvswitch\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.619988 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-etc-openvswitch\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.619997 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-var-lib-openvswitch\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620036 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fa6db6a-990c-40b7-bcb8-43e715b34830-ovnkube-config\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620057 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-var-lib-openvswitch\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620062 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-run-openvswitch\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620143 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-cni-bin\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620096 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-run-openvswitch\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620200 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt2bm\" (UniqueName: \"kubernetes.io/projected/2fa6db6a-990c-40b7-bcb8-43e715b34830-kube-api-access-dt2bm\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc 
kubenswrapper[4698]: I0216 00:17:35.620251 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-slash\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620256 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-cni-bin\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620327 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-slash\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620366 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-run-netns\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620408 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-log-socket\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620450 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fa6db6a-990c-40b7-bcb8-43e715b34830-env-overrides\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620492 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-run-systemd\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620535 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fa6db6a-990c-40b7-bcb8-43e715b34830-ovn-node-metrics-cert\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620580 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-systemd-units\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620669 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-node-log\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620719 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-cni-netd\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620764 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620867 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fa6db6a-990c-40b7-bcb8-43e715b34830-ovnkube-script-lib\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620909 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620969 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-node-log\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620977 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-run-systemd\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.621025 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-cni-netd\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.621090 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-log-socket\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.621135 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-host-run-netns\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.620983 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fa6db6a-990c-40b7-bcb8-43e715b34830-systemd-units\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.621959 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fa6db6a-990c-40b7-bcb8-43e715b34830-env-overrides\") pod \"ovnkube-node-g2kh7\" (UID: 
\"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.626023 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fa6db6a-990c-40b7-bcb8-43e715b34830-ovn-node-metrics-cert\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.651565 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt2bm\" (UniqueName: \"kubernetes.io/projected/2fa6db6a-990c-40b7-bcb8-43e715b34830-kube-api-access-dt2bm\") pod \"ovnkube-node-g2kh7\" (UID: \"2fa6db6a-990c-40b7-bcb8-43e715b34830\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:35 crc kubenswrapper[4698]: I0216 00:17:35.951206 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.116298 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dv2d_69838a3a-c20d-4770-b95f-ab85a265d53c/kube-multus/2.log" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.119395 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovnkube-controller/3.log" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.123366 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovn-acl-logging/0.log" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.124220 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmrt5_cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/ovn-controller/0.log" Feb 16 00:17:36 crc 
kubenswrapper[4698]: I0216 00:17:36.124753 4698 generic.go:334] "Generic (PLEG): container finished" podID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerID="db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e" exitCode=0 Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.124787 4698 generic.go:334] "Generic (PLEG): container finished" podID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerID="3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb" exitCode=0 Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.124799 4698 generic.go:334] "Generic (PLEG): container finished" podID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerID="88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b" exitCode=0 Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.124810 4698 generic.go:334] "Generic (PLEG): container finished" podID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" containerID="849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703" exitCode=0 Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.124845 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerDied","Data":"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e"} Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.124891 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerDied","Data":"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb"} Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.124905 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerDied","Data":"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b"} Feb 16 00:17:36 crc kubenswrapper[4698]: 
I0216 00:17:36.124919 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerDied","Data":"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703"} Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.124931 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" event={"ID":"cea3368d-30b3-4bf5-8c91-a6b9c254eaf0","Type":"ContainerDied","Data":"2a6b2009aec728237922dcd0d0eedf86aec7ca2849f56baf5f94648de61d1adf"} Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.124940 4698 scope.go:117] "RemoveContainer" containerID="db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.124938 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rmrt5" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.127183 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" event={"ID":"2fa6db6a-990c-40b7-bcb8-43e715b34830","Type":"ContainerStarted","Data":"51c49e5df096bfc52b78041332e8c3d17504dc99ee67060d38e26fbbd8139702"} Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.154902 4698 scope.go:117] "RemoveContainer" containerID="651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.182813 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rmrt5"] Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.183253 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rmrt5"] Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.190269 4698 scope.go:117] "RemoveContainer" containerID="3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb" Feb 16 00:17:36 
crc kubenswrapper[4698]: I0216 00:17:36.280126 4698 scope.go:117] "RemoveContainer" containerID="88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.305458 4698 scope.go:117] "RemoveContainer" containerID="849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.326180 4698 scope.go:117] "RemoveContainer" containerID="234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.346028 4698 scope.go:117] "RemoveContainer" containerID="dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.368429 4698 scope.go:117] "RemoveContainer" containerID="6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.383745 4698 scope.go:117] "RemoveContainer" containerID="201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.402563 4698 scope.go:117] "RemoveContainer" containerID="bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.428680 4698 scope.go:117] "RemoveContainer" containerID="db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e" Feb 16 00:17:36 crc kubenswrapper[4698]: E0216 00:17:36.429454 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e\": container with ID starting with db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e not found: ID does not exist" containerID="db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.429509 4698 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e"} err="failed to get container status \"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e\": rpc error: code = NotFound desc = could not find container \"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e\": container with ID starting with db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.429543 4698 scope.go:117] "RemoveContainer" containerID="651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27" Feb 16 00:17:36 crc kubenswrapper[4698]: E0216 00:17:36.430223 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\": container with ID starting with 651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27 not found: ID does not exist" containerID="651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.430285 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27"} err="failed to get container status \"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\": rpc error: code = NotFound desc = could not find container \"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\": container with ID starting with 651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.430334 4698 scope.go:117] "RemoveContainer" containerID="3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb" Feb 16 00:17:36 crc kubenswrapper[4698]: E0216 00:17:36.430962 4698 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\": container with ID starting with 3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb not found: ID does not exist" containerID="3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.430998 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb"} err="failed to get container status \"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\": rpc error: code = NotFound desc = could not find container \"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\": container with ID starting with 3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.431019 4698 scope.go:117] "RemoveContainer" containerID="88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b" Feb 16 00:17:36 crc kubenswrapper[4698]: E0216 00:17:36.431319 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\": container with ID starting with 88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b not found: ID does not exist" containerID="88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.431360 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b"} err="failed to get container status \"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\": rpc error: code = NotFound desc = could not find container 
\"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\": container with ID starting with 88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.431392 4698 scope.go:117] "RemoveContainer" containerID="849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703" Feb 16 00:17:36 crc kubenswrapper[4698]: E0216 00:17:36.431857 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\": container with ID starting with 849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703 not found: ID does not exist" containerID="849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.431892 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703"} err="failed to get container status \"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\": rpc error: code = NotFound desc = could not find container \"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\": container with ID starting with 849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.431922 4698 scope.go:117] "RemoveContainer" containerID="234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9" Feb 16 00:17:36 crc kubenswrapper[4698]: E0216 00:17:36.432399 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\": container with ID starting with 234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9 not found: ID does not exist" 
containerID="234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.432434 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9"} err="failed to get container status \"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\": rpc error: code = NotFound desc = could not find container \"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\": container with ID starting with 234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.432459 4698 scope.go:117] "RemoveContainer" containerID="dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50" Feb 16 00:17:36 crc kubenswrapper[4698]: E0216 00:17:36.432749 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\": container with ID starting with dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50 not found: ID does not exist" containerID="dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.432772 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50"} err="failed to get container status \"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\": rpc error: code = NotFound desc = could not find container \"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\": container with ID starting with dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.432787 4698 scope.go:117] 
"RemoveContainer" containerID="6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef" Feb 16 00:17:36 crc kubenswrapper[4698]: E0216 00:17:36.433056 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\": container with ID starting with 6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef not found: ID does not exist" containerID="6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.433088 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef"} err="failed to get container status \"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\": rpc error: code = NotFound desc = could not find container \"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\": container with ID starting with 6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.433118 4698 scope.go:117] "RemoveContainer" containerID="201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b" Feb 16 00:17:36 crc kubenswrapper[4698]: E0216 00:17:36.433439 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\": container with ID starting with 201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b not found: ID does not exist" containerID="201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.433462 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b"} err="failed to get container status \"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\": rpc error: code = NotFound desc = could not find container \"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\": container with ID starting with 201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.433477 4698 scope.go:117] "RemoveContainer" containerID="bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c" Feb 16 00:17:36 crc kubenswrapper[4698]: E0216 00:17:36.433773 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\": container with ID starting with bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c not found: ID does not exist" containerID="bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.433809 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c"} err="failed to get container status \"bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\": rpc error: code = NotFound desc = could not find container \"bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\": container with ID starting with bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.433835 4698 scope.go:117] "RemoveContainer" containerID="db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.434095 4698 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e"} err="failed to get container status \"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e\": rpc error: code = NotFound desc = could not find container \"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e\": container with ID starting with db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.434114 4698 scope.go:117] "RemoveContainer" containerID="651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.434361 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27"} err="failed to get container status \"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\": rpc error: code = NotFound desc = could not find container \"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\": container with ID starting with 651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.434391 4698 scope.go:117] "RemoveContainer" containerID="3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.434682 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb"} err="failed to get container status \"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\": rpc error: code = NotFound desc = could not find container \"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\": container with ID starting with 3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb not 
found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.434710 4698 scope.go:117] "RemoveContainer" containerID="88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.435001 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b"} err="failed to get container status \"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\": rpc error: code = NotFound desc = could not find container \"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\": container with ID starting with 88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.435021 4698 scope.go:117] "RemoveContainer" containerID="849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.435361 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703"} err="failed to get container status \"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\": rpc error: code = NotFound desc = could not find container \"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\": container with ID starting with 849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.435387 4698 scope.go:117] "RemoveContainer" containerID="234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.435857 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9"} err="failed to get 
container status \"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\": rpc error: code = NotFound desc = could not find container \"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\": container with ID starting with 234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.435874 4698 scope.go:117] "RemoveContainer" containerID="dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.436145 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50"} err="failed to get container status \"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\": rpc error: code = NotFound desc = could not find container \"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\": container with ID starting with dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.436161 4698 scope.go:117] "RemoveContainer" containerID="6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.436690 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef"} err="failed to get container status \"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\": rpc error: code = NotFound desc = could not find container \"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\": container with ID starting with 6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.436714 4698 scope.go:117] "RemoveContainer" 
containerID="201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.437150 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b"} err="failed to get container status \"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\": rpc error: code = NotFound desc = could not find container \"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\": container with ID starting with 201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.437195 4698 scope.go:117] "RemoveContainer" containerID="bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.437451 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c"} err="failed to get container status \"bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\": rpc error: code = NotFound desc = could not find container \"bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\": container with ID starting with bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.437482 4698 scope.go:117] "RemoveContainer" containerID="db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.437872 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e"} err="failed to get container status \"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e\": rpc error: code = NotFound desc = could 
not find container \"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e\": container with ID starting with db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.437901 4698 scope.go:117] "RemoveContainer" containerID="651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.438225 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27"} err="failed to get container status \"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\": rpc error: code = NotFound desc = could not find container \"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\": container with ID starting with 651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.438253 4698 scope.go:117] "RemoveContainer" containerID="3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.438594 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb"} err="failed to get container status \"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\": rpc error: code = NotFound desc = could not find container \"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\": container with ID starting with 3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.438632 4698 scope.go:117] "RemoveContainer" containerID="88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 
00:17:36.438907 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b"} err="failed to get container status \"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\": rpc error: code = NotFound desc = could not find container \"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\": container with ID starting with 88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.438958 4698 scope.go:117] "RemoveContainer" containerID="849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.439356 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703"} err="failed to get container status \"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\": rpc error: code = NotFound desc = could not find container \"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\": container with ID starting with 849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.439391 4698 scope.go:117] "RemoveContainer" containerID="234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.439700 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9"} err="failed to get container status \"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\": rpc error: code = NotFound desc = could not find container \"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\": container with ID starting with 
234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.439727 4698 scope.go:117] "RemoveContainer" containerID="dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.440048 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50"} err="failed to get container status \"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\": rpc error: code = NotFound desc = could not find container \"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\": container with ID starting with dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.440091 4698 scope.go:117] "RemoveContainer" containerID="6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.440453 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef"} err="failed to get container status \"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\": rpc error: code = NotFound desc = could not find container \"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\": container with ID starting with 6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.440481 4698 scope.go:117] "RemoveContainer" containerID="201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.440977 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b"} err="failed to get container status \"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\": rpc error: code = NotFound desc = could not find container \"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\": container with ID starting with 201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.441003 4698 scope.go:117] "RemoveContainer" containerID="bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.441507 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c"} err="failed to get container status \"bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\": rpc error: code = NotFound desc = could not find container \"bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\": container with ID starting with bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.441560 4698 scope.go:117] "RemoveContainer" containerID="db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.442059 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e"} err="failed to get container status \"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e\": rpc error: code = NotFound desc = could not find container \"db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e\": container with ID starting with db0881ca3f42a64a204ea8073632fba854bfaa6ad948d7e63929853303e9f34e not found: ID does not 
exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.442086 4698 scope.go:117] "RemoveContainer" containerID="651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.442406 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27"} err="failed to get container status \"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\": rpc error: code = NotFound desc = could not find container \"651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27\": container with ID starting with 651a64550e8158fca6f6c5e6f2b1f55da5dcb9fc21d413e27dc246e349415b27 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.442464 4698 scope.go:117] "RemoveContainer" containerID="3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.443181 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb"} err="failed to get container status \"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\": rpc error: code = NotFound desc = could not find container \"3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb\": container with ID starting with 3232b9403b806533d0c539f49dd41dd8e8cff01a947fce69efccfd34094c10bb not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.443212 4698 scope.go:117] "RemoveContainer" containerID="88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.443600 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b"} err="failed to get container status 
\"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\": rpc error: code = NotFound desc = could not find container \"88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b\": container with ID starting with 88186574d3145f6bc199dd5dec4d8f0b47dbbb829aafdc57147f29cdee96705b not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.443687 4698 scope.go:117] "RemoveContainer" containerID="849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.444411 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703"} err="failed to get container status \"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\": rpc error: code = NotFound desc = could not find container \"849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703\": container with ID starting with 849baee0c646a39c5f13aed475b051990546a889b58fc26e5009a333c6b23703 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.444435 4698 scope.go:117] "RemoveContainer" containerID="234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.444981 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9"} err="failed to get container status \"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\": rpc error: code = NotFound desc = could not find container \"234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9\": container with ID starting with 234273f7d2a82ba88f0bbd9646314448c8bdffbdc3d29dbdd3f6279b19404ae9 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.445026 4698 scope.go:117] "RemoveContainer" 
containerID="dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.445734 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50"} err="failed to get container status \"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\": rpc error: code = NotFound desc = could not find container \"dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50\": container with ID starting with dfe63a906929c0b75dd5858a1ec481c9c645f6f96aa0a7466882f0c420140a50 not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.445817 4698 scope.go:117] "RemoveContainer" containerID="6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.446409 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef"} err="failed to get container status \"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\": rpc error: code = NotFound desc = could not find container \"6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef\": container with ID starting with 6fe86d6e72f6df8a9db1862f01a0bc5b63a8d6c259c4eff3ad9c423a3731dbef not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.446441 4698 scope.go:117] "RemoveContainer" containerID="201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.446933 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b"} err="failed to get container status \"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\": rpc error: code = NotFound desc = could 
not find container \"201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b\": container with ID starting with 201aba38d98e7d70a7c412348bce02215e5fccd1de3f8f749c86dc9f8043b97b not found: ID does not exist" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.446983 4698 scope.go:117] "RemoveContainer" containerID="bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c" Feb 16 00:17:36 crc kubenswrapper[4698]: I0216 00:17:36.447449 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c"} err="failed to get container status \"bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\": rpc error: code = NotFound desc = could not find container \"bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c\": container with ID starting with bf294692028754306567425e82a46dcc7939283da870918c0eccd53d35c1630c not found: ID does not exist" Feb 16 00:17:37 crc kubenswrapper[4698]: I0216 00:17:37.134409 4698 generic.go:334] "Generic (PLEG): container finished" podID="2fa6db6a-990c-40b7-bcb8-43e715b34830" containerID="fd77e7a87beae83f6c594a23c0fe48b393cf844a023b9a6ba2ac04b129b3b1ed" exitCode=0 Feb 16 00:17:37 crc kubenswrapper[4698]: I0216 00:17:37.134497 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" event={"ID":"2fa6db6a-990c-40b7-bcb8-43e715b34830","Type":"ContainerDied","Data":"fd77e7a87beae83f6c594a23c0fe48b393cf844a023b9a6ba2ac04b129b3b1ed"} Feb 16 00:17:37 crc kubenswrapper[4698]: I0216 00:17:37.242777 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea3368d-30b3-4bf5-8c91-a6b9c254eaf0" path="/var/lib/kubelet/pods/cea3368d-30b3-4bf5-8c91-a6b9c254eaf0/volumes" Feb 16 00:17:38 crc kubenswrapper[4698]: I0216 00:17:38.150276 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" 
event={"ID":"2fa6db6a-990c-40b7-bcb8-43e715b34830","Type":"ContainerStarted","Data":"7512465f48a9d874509e7ea13aec13e021fda31fe970c5453c09b1c0a361fadd"} Feb 16 00:17:38 crc kubenswrapper[4698]: I0216 00:17:38.150783 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" event={"ID":"2fa6db6a-990c-40b7-bcb8-43e715b34830","Type":"ContainerStarted","Data":"bf2e9cf39d1dae310c0c6174ff7ef7c4c2e97c7ec45d069ecf1ed9dfff887a9b"} Feb 16 00:17:38 crc kubenswrapper[4698]: I0216 00:17:38.150809 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" event={"ID":"2fa6db6a-990c-40b7-bcb8-43e715b34830","Type":"ContainerStarted","Data":"f6af6c92f4f47f32075c1d9de9539f20d94608ea405ef1e5a32e85acf58fd537"} Feb 16 00:17:38 crc kubenswrapper[4698]: I0216 00:17:38.150829 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" event={"ID":"2fa6db6a-990c-40b7-bcb8-43e715b34830","Type":"ContainerStarted","Data":"8268407fa4127570adc8c19e94415341458afdd7db30f1672e7a13bad3a23f4b"} Feb 16 00:17:38 crc kubenswrapper[4698]: I0216 00:17:38.150853 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" event={"ID":"2fa6db6a-990c-40b7-bcb8-43e715b34830","Type":"ContainerStarted","Data":"bdc728f886ca5331bde9a33be1cba114a1d0bc4f273d92e8f56bb1e3ed024920"} Feb 16 00:17:38 crc kubenswrapper[4698]: I0216 00:17:38.150873 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" event={"ID":"2fa6db6a-990c-40b7-bcb8-43e715b34830","Type":"ContainerStarted","Data":"ad101493b10b7bfa8ba94669931d7a50f9db14a403dd76846f5f45c656a026b3"} Feb 16 00:17:41 crc kubenswrapper[4698]: I0216 00:17:41.182278 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" 
event={"ID":"2fa6db6a-990c-40b7-bcb8-43e715b34830","Type":"ContainerStarted","Data":"fdde0be39120665900bf0281e08526449c9028d634c66802bc0f6b007316a291"} Feb 16 00:17:43 crc kubenswrapper[4698]: I0216 00:17:43.199623 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" event={"ID":"2fa6db6a-990c-40b7-bcb8-43e715b34830","Type":"ContainerStarted","Data":"b8643879c83c51ab979743734421030399996c872cb39e9faf37dfe30720cf79"} Feb 16 00:17:43 crc kubenswrapper[4698]: I0216 00:17:43.200153 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:43 crc kubenswrapper[4698]: I0216 00:17:43.200173 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:43 crc kubenswrapper[4698]: I0216 00:17:43.200186 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:43 crc kubenswrapper[4698]: I0216 00:17:43.240714 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:43 crc kubenswrapper[4698]: I0216 00:17:43.245687 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" podStartSLOduration=8.245661031 podStartE2EDuration="8.245661031s" podCreationTimestamp="2026-02-16 00:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:17:43.240592733 +0000 UTC m=+672.898491495" watchObservedRunningTime="2026-02-16 00:17:43.245661031 +0000 UTC m=+672.903559793" Feb 16 00:17:43 crc kubenswrapper[4698]: I0216 00:17:43.247466 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:17:49 crc 
kubenswrapper[4698]: I0216 00:17:49.233505 4698 scope.go:117] "RemoveContainer" containerID="89b1308232f81e46ec49509566a9454686396ff65a1b76bf4537910414500054" Feb 16 00:17:49 crc kubenswrapper[4698]: E0216 00:17:49.234847 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2dv2d_openshift-multus(69838a3a-c20d-4770-b95f-ab85a265d53c)\"" pod="openshift-multus/multus-2dv2d" podUID="69838a3a-c20d-4770-b95f-ab85a265d53c" Feb 16 00:18:01 crc kubenswrapper[4698]: I0216 00:18:01.234214 4698 scope.go:117] "RemoveContainer" containerID="89b1308232f81e46ec49509566a9454686396ff65a1b76bf4537910414500054" Feb 16 00:18:02 crc kubenswrapper[4698]: I0216 00:18:02.341332 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dv2d_69838a3a-c20d-4770-b95f-ab85a265d53c/kube-multus/2.log" Feb 16 00:18:02 crc kubenswrapper[4698]: I0216 00:18:02.341911 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dv2d" event={"ID":"69838a3a-c20d-4770-b95f-ab85a265d53c","Type":"ContainerStarted","Data":"1ea89a02ed64cc92b7c3e8ab91838f99465df2bf435152d0742b0a8f2fed96a6"} Feb 16 00:18:05 crc kubenswrapper[4698]: I0216 00:18:05.989014 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g2kh7" Feb 16 00:18:27 crc kubenswrapper[4698]: I0216 00:18:27.045761 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:18:27 crc kubenswrapper[4698]: I0216 00:18:27.045997 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" 
podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.057668 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9lwt"] Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.058953 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w9lwt" podUID="15790639-5955-4eca-91d8-aab72bf25943" containerName="registry-server" containerID="cri-o://9b6cdc59739da10d4128cf1d92336194eb9682b30b18ce49011967bece1d58aa" gracePeriod=30 Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.446831 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.470178 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15790639-5955-4eca-91d8-aab72bf25943-utilities\") pod \"15790639-5955-4eca-91d8-aab72bf25943\" (UID: \"15790639-5955-4eca-91d8-aab72bf25943\") " Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.470317 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15790639-5955-4eca-91d8-aab72bf25943-catalog-content\") pod \"15790639-5955-4eca-91d8-aab72bf25943\" (UID: \"15790639-5955-4eca-91d8-aab72bf25943\") " Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.470372 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm6vl\" (UniqueName: \"kubernetes.io/projected/15790639-5955-4eca-91d8-aab72bf25943-kube-api-access-mm6vl\") pod \"15790639-5955-4eca-91d8-aab72bf25943\" (UID: 
\"15790639-5955-4eca-91d8-aab72bf25943\") " Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.471805 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15790639-5955-4eca-91d8-aab72bf25943-utilities" (OuterVolumeSpecName: "utilities") pod "15790639-5955-4eca-91d8-aab72bf25943" (UID: "15790639-5955-4eca-91d8-aab72bf25943"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.479192 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15790639-5955-4eca-91d8-aab72bf25943-kube-api-access-mm6vl" (OuterVolumeSpecName: "kube-api-access-mm6vl") pod "15790639-5955-4eca-91d8-aab72bf25943" (UID: "15790639-5955-4eca-91d8-aab72bf25943"). InnerVolumeSpecName "kube-api-access-mm6vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.500044 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15790639-5955-4eca-91d8-aab72bf25943-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15790639-5955-4eca-91d8-aab72bf25943" (UID: "15790639-5955-4eca-91d8-aab72bf25943"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.572212 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15790639-5955-4eca-91d8-aab72bf25943-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.572272 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15790639-5955-4eca-91d8-aab72bf25943-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.572333 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm6vl\" (UniqueName: \"kubernetes.io/projected/15790639-5955-4eca-91d8-aab72bf25943-kube-api-access-mm6vl\") on node \"crc\" DevicePath \"\"" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.590443 4698 generic.go:334] "Generic (PLEG): container finished" podID="15790639-5955-4eca-91d8-aab72bf25943" containerID="9b6cdc59739da10d4128cf1d92336194eb9682b30b18ce49011967bece1d58aa" exitCode=0 Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.590497 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9lwt" event={"ID":"15790639-5955-4eca-91d8-aab72bf25943","Type":"ContainerDied","Data":"9b6cdc59739da10d4128cf1d92336194eb9682b30b18ce49011967bece1d58aa"} Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.590537 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9lwt" event={"ID":"15790639-5955-4eca-91d8-aab72bf25943","Type":"ContainerDied","Data":"680087beb873d245a50d8063e6eca098f7b53e4278183200fec590327fc184b6"} Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.590557 4698 scope.go:117] "RemoveContainer" containerID="9b6cdc59739da10d4128cf1d92336194eb9682b30b18ce49011967bece1d58aa" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 
00:18:38.590586 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9lwt" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.618044 4698 scope.go:117] "RemoveContainer" containerID="796586abf5d78adfdb9ca73a99a03942a0ef3a427734c96d165cbae0b8cf405c" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.635017 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9lwt"] Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.639229 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9lwt"] Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.665855 4698 scope.go:117] "RemoveContainer" containerID="4c840b8f4eb9b5edecc524236ccf130bc7932643d8f5bd47ba2ebe97f11977b5" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.685870 4698 scope.go:117] "RemoveContainer" containerID="9b6cdc59739da10d4128cf1d92336194eb9682b30b18ce49011967bece1d58aa" Feb 16 00:18:38 crc kubenswrapper[4698]: E0216 00:18:38.686700 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b6cdc59739da10d4128cf1d92336194eb9682b30b18ce49011967bece1d58aa\": container with ID starting with 9b6cdc59739da10d4128cf1d92336194eb9682b30b18ce49011967bece1d58aa not found: ID does not exist" containerID="9b6cdc59739da10d4128cf1d92336194eb9682b30b18ce49011967bece1d58aa" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.686786 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b6cdc59739da10d4128cf1d92336194eb9682b30b18ce49011967bece1d58aa"} err="failed to get container status \"9b6cdc59739da10d4128cf1d92336194eb9682b30b18ce49011967bece1d58aa\": rpc error: code = NotFound desc = could not find container \"9b6cdc59739da10d4128cf1d92336194eb9682b30b18ce49011967bece1d58aa\": container with ID starting with 
9b6cdc59739da10d4128cf1d92336194eb9682b30b18ce49011967bece1d58aa not found: ID does not exist" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.686852 4698 scope.go:117] "RemoveContainer" containerID="796586abf5d78adfdb9ca73a99a03942a0ef3a427734c96d165cbae0b8cf405c" Feb 16 00:18:38 crc kubenswrapper[4698]: E0216 00:18:38.687373 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"796586abf5d78adfdb9ca73a99a03942a0ef3a427734c96d165cbae0b8cf405c\": container with ID starting with 796586abf5d78adfdb9ca73a99a03942a0ef3a427734c96d165cbae0b8cf405c not found: ID does not exist" containerID="796586abf5d78adfdb9ca73a99a03942a0ef3a427734c96d165cbae0b8cf405c" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.687421 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"796586abf5d78adfdb9ca73a99a03942a0ef3a427734c96d165cbae0b8cf405c"} err="failed to get container status \"796586abf5d78adfdb9ca73a99a03942a0ef3a427734c96d165cbae0b8cf405c\": rpc error: code = NotFound desc = could not find container \"796586abf5d78adfdb9ca73a99a03942a0ef3a427734c96d165cbae0b8cf405c\": container with ID starting with 796586abf5d78adfdb9ca73a99a03942a0ef3a427734c96d165cbae0b8cf405c not found: ID does not exist" Feb 16 00:18:38 crc kubenswrapper[4698]: I0216 00:18:38.687457 4698 scope.go:117] "RemoveContainer" containerID="4c840b8f4eb9b5edecc524236ccf130bc7932643d8f5bd47ba2ebe97f11977b5" Feb 16 00:18:38 crc kubenswrapper[4698]: E0216 00:18:38.688005 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c840b8f4eb9b5edecc524236ccf130bc7932643d8f5bd47ba2ebe97f11977b5\": container with ID starting with 4c840b8f4eb9b5edecc524236ccf130bc7932643d8f5bd47ba2ebe97f11977b5 not found: ID does not exist" containerID="4c840b8f4eb9b5edecc524236ccf130bc7932643d8f5bd47ba2ebe97f11977b5" Feb 16 00:18:38 crc 
kubenswrapper[4698]: I0216 00:18:38.688050 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c840b8f4eb9b5edecc524236ccf130bc7932643d8f5bd47ba2ebe97f11977b5"} err="failed to get container status \"4c840b8f4eb9b5edecc524236ccf130bc7932643d8f5bd47ba2ebe97f11977b5\": rpc error: code = NotFound desc = could not find container \"4c840b8f4eb9b5edecc524236ccf130bc7932643d8f5bd47ba2ebe97f11977b5\": container with ID starting with 4c840b8f4eb9b5edecc524236ccf130bc7932643d8f5bd47ba2ebe97f11977b5 not found: ID does not exist" Feb 16 00:18:39 crc kubenswrapper[4698]: I0216 00:18:39.242596 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15790639-5955-4eca-91d8-aab72bf25943" path="/var/lib/kubelet/pods/15790639-5955-4eca-91d8-aab72bf25943/volumes" Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.031439 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2"] Feb 16 00:18:42 crc kubenswrapper[4698]: E0216 00:18:42.031782 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15790639-5955-4eca-91d8-aab72bf25943" containerName="extract-utilities" Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.031801 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="15790639-5955-4eca-91d8-aab72bf25943" containerName="extract-utilities" Feb 16 00:18:42 crc kubenswrapper[4698]: E0216 00:18:42.031814 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15790639-5955-4eca-91d8-aab72bf25943" containerName="registry-server" Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.031822 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="15790639-5955-4eca-91d8-aab72bf25943" containerName="registry-server" Feb 16 00:18:42 crc kubenswrapper[4698]: E0216 00:18:42.031845 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="15790639-5955-4eca-91d8-aab72bf25943" containerName="extract-content" Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.031853 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="15790639-5955-4eca-91d8-aab72bf25943" containerName="extract-content" Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.031970 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="15790639-5955-4eca-91d8-aab72bf25943" containerName="registry-server" Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.032944 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2" Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.035821 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.052092 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2"] Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.230536 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wfwm\" (UniqueName: \"kubernetes.io/projected/0910281b-5250-4f30-bd3b-966d88ce449a-kube-api-access-6wfwm\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2\" (UID: \"0910281b-5250-4f30-bd3b-966d88ce449a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2" Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.231056 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0910281b-5250-4f30-bd3b-966d88ce449a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2\" (UID: \"0910281b-5250-4f30-bd3b-966d88ce449a\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2"
Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.231234 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0910281b-5250-4f30-bd3b-966d88ce449a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2\" (UID: \"0910281b-5250-4f30-bd3b-966d88ce449a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2"
Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.332455 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0910281b-5250-4f30-bd3b-966d88ce449a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2\" (UID: \"0910281b-5250-4f30-bd3b-966d88ce449a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2"
Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.332532 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0910281b-5250-4f30-bd3b-966d88ce449a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2\" (UID: \"0910281b-5250-4f30-bd3b-966d88ce449a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2"
Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.332594 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wfwm\" (UniqueName: \"kubernetes.io/projected/0910281b-5250-4f30-bd3b-966d88ce449a-kube-api-access-6wfwm\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2\" (UID: \"0910281b-5250-4f30-bd3b-966d88ce449a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2"
Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.333312 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0910281b-5250-4f30-bd3b-966d88ce449a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2\" (UID: \"0910281b-5250-4f30-bd3b-966d88ce449a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2"
Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.333408 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0910281b-5250-4f30-bd3b-966d88ce449a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2\" (UID: \"0910281b-5250-4f30-bd3b-966d88ce449a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2"
Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.358917 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wfwm\" (UniqueName: \"kubernetes.io/projected/0910281b-5250-4f30-bd3b-966d88ce449a-kube-api-access-6wfwm\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2\" (UID: \"0910281b-5250-4f30-bd3b-966d88ce449a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2"
Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.648486 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2"
Feb 16 00:18:42 crc kubenswrapper[4698]: I0216 00:18:42.947209 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2"]
Feb 16 00:18:43 crc kubenswrapper[4698]: I0216 00:18:43.625022 4698 generic.go:334] "Generic (PLEG): container finished" podID="0910281b-5250-4f30-bd3b-966d88ce449a" containerID="32b4d9183467c0524e4866c136c7afc978efe831cc42568b0f62f266da9b093d" exitCode=0
Feb 16 00:18:43 crc kubenswrapper[4698]: I0216 00:18:43.625080 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2" event={"ID":"0910281b-5250-4f30-bd3b-966d88ce449a","Type":"ContainerDied","Data":"32b4d9183467c0524e4866c136c7afc978efe831cc42568b0f62f266da9b093d"}
Feb 16 00:18:43 crc kubenswrapper[4698]: I0216 00:18:43.625115 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2" event={"ID":"0910281b-5250-4f30-bd3b-966d88ce449a","Type":"ContainerStarted","Data":"adf83c46ff48c1dfcc8c032e8900c0bb99a73ea07962ed05fcc30e46e8163144"}
Feb 16 00:18:43 crc kubenswrapper[4698]: I0216 00:18:43.628078 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 00:18:45 crc kubenswrapper[4698]: I0216 00:18:45.640964 4698 generic.go:334] "Generic (PLEG): container finished" podID="0910281b-5250-4f30-bd3b-966d88ce449a" containerID="319c6d09b4d90d339d9ae5195093c3d55b492e73475492938dba2352324bdac2" exitCode=0
Feb 16 00:18:45 crc kubenswrapper[4698]: I0216 00:18:45.641026 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2" event={"ID":"0910281b-5250-4f30-bd3b-966d88ce449a","Type":"ContainerDied","Data":"319c6d09b4d90d339d9ae5195093c3d55b492e73475492938dba2352324bdac2"}
Feb 16 00:18:46 crc kubenswrapper[4698]: I0216 00:18:46.652375 4698 generic.go:334] "Generic (PLEG): container finished" podID="0910281b-5250-4f30-bd3b-966d88ce449a" containerID="2f23be51c85376a68546f3e82497d0bd9d676b283cd9ef5d0891a5d5baa5e6c4" exitCode=0
Feb 16 00:18:46 crc kubenswrapper[4698]: I0216 00:18:46.652444 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2" event={"ID":"0910281b-5250-4f30-bd3b-966d88ce449a","Type":"ContainerDied","Data":"2f23be51c85376a68546f3e82497d0bd9d676b283cd9ef5d0891a5d5baa5e6c4"}
Feb 16 00:18:47 crc kubenswrapper[4698]: I0216 00:18:47.984911 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2"
Feb 16 00:18:47 crc kubenswrapper[4698]: I0216 00:18:47.987276 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"]
Feb 16 00:18:47 crc kubenswrapper[4698]: E0216 00:18:47.987606 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0910281b-5250-4f30-bd3b-966d88ce449a" containerName="extract"
Feb 16 00:18:47 crc kubenswrapper[4698]: I0216 00:18:47.987663 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0910281b-5250-4f30-bd3b-966d88ce449a" containerName="extract"
Feb 16 00:18:47 crc kubenswrapper[4698]: E0216 00:18:47.987682 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0910281b-5250-4f30-bd3b-966d88ce449a" containerName="pull"
Feb 16 00:18:47 crc kubenswrapper[4698]: I0216 00:18:47.987692 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0910281b-5250-4f30-bd3b-966d88ce449a" containerName="pull"
Feb 16 00:18:47 crc kubenswrapper[4698]: E0216 00:18:47.987708 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0910281b-5250-4f30-bd3b-966d88ce449a" containerName="util"
Feb 16 00:18:47 crc kubenswrapper[4698]: I0216 00:18:47.987719 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0910281b-5250-4f30-bd3b-966d88ce449a" containerName="util"
Feb 16 00:18:47 crc kubenswrapper[4698]: I0216 00:18:47.987855 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0910281b-5250-4f30-bd3b-966d88ce449a" containerName="extract"
Feb 16 00:18:47 crc kubenswrapper[4698]: I0216 00:18:47.989004 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"
Feb 16 00:18:47 crc kubenswrapper[4698]: I0216 00:18:47.996949 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"]
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.111425 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wfwm\" (UniqueName: \"kubernetes.io/projected/0910281b-5250-4f30-bd3b-966d88ce449a-kube-api-access-6wfwm\") pod \"0910281b-5250-4f30-bd3b-966d88ce449a\" (UID: \"0910281b-5250-4f30-bd3b-966d88ce449a\") "
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.112137 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0910281b-5250-4f30-bd3b-966d88ce449a-util\") pod \"0910281b-5250-4f30-bd3b-966d88ce449a\" (UID: \"0910281b-5250-4f30-bd3b-966d88ce449a\") "
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.112388 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0910281b-5250-4f30-bd3b-966d88ce449a-bundle\") pod \"0910281b-5250-4f30-bd3b-966d88ce449a\" (UID: \"0910281b-5250-4f30-bd3b-966d88ce449a\") "
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.112835 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b9a8da53-3db9-4151-9170-1ec4f853c766-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx\" (UID: \"b9a8da53-3db9-4151-9170-1ec4f853c766\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.113534 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf4pm\" (UniqueName: \"kubernetes.io/projected/b9a8da53-3db9-4151-9170-1ec4f853c766-kube-api-access-kf4pm\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx\" (UID: \"b9a8da53-3db9-4151-9170-1ec4f853c766\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.114242 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9a8da53-3db9-4151-9170-1ec4f853c766-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx\" (UID: \"b9a8da53-3db9-4151-9170-1ec4f853c766\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.116884 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0910281b-5250-4f30-bd3b-966d88ce449a-bundle" (OuterVolumeSpecName: "bundle") pod "0910281b-5250-4f30-bd3b-966d88ce449a" (UID: "0910281b-5250-4f30-bd3b-966d88ce449a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.120577 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0910281b-5250-4f30-bd3b-966d88ce449a-kube-api-access-6wfwm" (OuterVolumeSpecName: "kube-api-access-6wfwm") pod "0910281b-5250-4f30-bd3b-966d88ce449a" (UID: "0910281b-5250-4f30-bd3b-966d88ce449a"). InnerVolumeSpecName "kube-api-access-6wfwm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.143450 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0910281b-5250-4f30-bd3b-966d88ce449a-util" (OuterVolumeSpecName: "util") pod "0910281b-5250-4f30-bd3b-966d88ce449a" (UID: "0910281b-5250-4f30-bd3b-966d88ce449a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.215419 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf4pm\" (UniqueName: \"kubernetes.io/projected/b9a8da53-3db9-4151-9170-1ec4f853c766-kube-api-access-kf4pm\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx\" (UID: \"b9a8da53-3db9-4151-9170-1ec4f853c766\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.215509 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9a8da53-3db9-4151-9170-1ec4f853c766-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx\" (UID: \"b9a8da53-3db9-4151-9170-1ec4f853c766\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.215579 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b9a8da53-3db9-4151-9170-1ec4f853c766-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx\" (UID: \"b9a8da53-3db9-4151-9170-1ec4f853c766\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.215739 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wfwm\" (UniqueName: \"kubernetes.io/projected/0910281b-5250-4f30-bd3b-966d88ce449a-kube-api-access-6wfwm\") on node \"crc\" DevicePath \"\""
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.215767 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0910281b-5250-4f30-bd3b-966d88ce449a-util\") on node \"crc\" DevicePath \"\""
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.215786 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0910281b-5250-4f30-bd3b-966d88ce449a-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.216773 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b9a8da53-3db9-4151-9170-1ec4f853c766-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx\" (UID: \"b9a8da53-3db9-4151-9170-1ec4f853c766\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.216993 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9a8da53-3db9-4151-9170-1ec4f853c766-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx\" (UID: \"b9a8da53-3db9-4151-9170-1ec4f853c766\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.237273 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf4pm\" (UniqueName: \"kubernetes.io/projected/b9a8da53-3db9-4151-9170-1ec4f853c766-kube-api-access-kf4pm\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx\" (UID: \"b9a8da53-3db9-4151-9170-1ec4f853c766\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.326035 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.571532 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"]
Feb 16 00:18:48 crc kubenswrapper[4698]: W0216 00:18:48.576531 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9a8da53_3db9_4151_9170_1ec4f853c766.slice/crio-66463e6d652ca159d840feddafa557b5ac01f6b30d35280caf791f2eb6c59c16 WatchSource:0}: Error finding container 66463e6d652ca159d840feddafa557b5ac01f6b30d35280caf791f2eb6c59c16: Status 404 returned error can't find the container with id 66463e6d652ca159d840feddafa557b5ac01f6b30d35280caf791f2eb6c59c16
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.669569 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx" event={"ID":"b9a8da53-3db9-4151-9170-1ec4f853c766","Type":"ContainerStarted","Data":"66463e6d652ca159d840feddafa557b5ac01f6b30d35280caf791f2eb6c59c16"}
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.673405 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2" event={"ID":"0910281b-5250-4f30-bd3b-966d88ce449a","Type":"ContainerDied","Data":"adf83c46ff48c1dfcc8c032e8900c0bb99a73ea07962ed05fcc30e46e8163144"}
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.673466 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf83c46ff48c1dfcc8c032e8900c0bb99a73ea07962ed05fcc30e46e8163144"
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.673551 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2"
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.980601 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"]
Feb 16 00:18:48 crc kubenswrapper[4698]: I0216 00:18:48.981760 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"
Feb 16 00:18:49 crc kubenswrapper[4698]: I0216 00:18:49.007067 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"]
Feb 16 00:18:49 crc kubenswrapper[4698]: I0216 00:18:49.139962 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4\" (UID: \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"
Feb 16 00:18:49 crc kubenswrapper[4698]: I0216 00:18:49.140044 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55vs9\" (UniqueName: \"kubernetes.io/projected/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-kube-api-access-55vs9\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4\" (UID: \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"
Feb 16 00:18:49 crc kubenswrapper[4698]: I0216 00:18:49.140096 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4\" (UID: \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"
Feb 16 00:18:49 crc kubenswrapper[4698]: I0216 00:18:49.240866 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4\" (UID: \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"
Feb 16 00:18:49 crc kubenswrapper[4698]: I0216 00:18:49.240934 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55vs9\" (UniqueName: \"kubernetes.io/projected/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-kube-api-access-55vs9\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4\" (UID: \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"
Feb 16 00:18:49 crc kubenswrapper[4698]: I0216 00:18:49.240978 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4\" (UID: \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"
Feb 16 00:18:49 crc kubenswrapper[4698]: I0216 00:18:49.241542 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4\" (UID: \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"
Feb 16 00:18:49 crc kubenswrapper[4698]: I0216 00:18:49.241919 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4\" (UID: \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"
Feb 16 00:18:49 crc kubenswrapper[4698]: I0216 00:18:49.272193 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55vs9\" (UniqueName: \"kubernetes.io/projected/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-kube-api-access-55vs9\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4\" (UID: \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"
Feb 16 00:18:49 crc kubenswrapper[4698]: I0216 00:18:49.320703 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"
Feb 16 00:18:49 crc kubenswrapper[4698]: I0216 00:18:49.680755 4698 generic.go:334] "Generic (PLEG): container finished" podID="b9a8da53-3db9-4151-9170-1ec4f853c766" containerID="df40492f36ffe3a47dbedd63472630a336abaa8ec6826f19163807139947da6d" exitCode=0
Feb 16 00:18:49 crc kubenswrapper[4698]: I0216 00:18:49.680959 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx" event={"ID":"b9a8da53-3db9-4151-9170-1ec4f853c766","Type":"ContainerDied","Data":"df40492f36ffe3a47dbedd63472630a336abaa8ec6826f19163807139947da6d"}
Feb 16 00:18:49 crc kubenswrapper[4698]: I0216 00:18:49.835544 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"]
Feb 16 00:18:50 crc kubenswrapper[4698]: I0216 00:18:50.688795 4698 generic.go:334] "Generic (PLEG): container finished" podID="18c8a3ac-1fef-4511-baac-d9ca8e2b7a49" containerID="3d19e465f5cd248787cbef97d6d5fad1588b5879a88f75fe77f23643e762f8cd" exitCode=0
Feb 16 00:18:50 crc kubenswrapper[4698]: I0216 00:18:50.688856 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4" event={"ID":"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49","Type":"ContainerDied","Data":"3d19e465f5cd248787cbef97d6d5fad1588b5879a88f75fe77f23643e762f8cd"}
Feb 16 00:18:50 crc kubenswrapper[4698]: I0216 00:18:50.689228 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4" event={"ID":"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49","Type":"ContainerStarted","Data":"f02d4fa30ecd8fb79505b6bba4c056c52f77383556738ff8bf44df4d03a5bf89"}
Feb 16 00:18:50 crc kubenswrapper[4698]: I0216 00:18:50.692572 4698 generic.go:334] "Generic (PLEG): container finished" podID="b9a8da53-3db9-4151-9170-1ec4f853c766" containerID="4e911e46c1a1d927938fbef30e81fc41aef779b3ae75df9383f4642ed7f60c02" exitCode=0
Feb 16 00:18:50 crc kubenswrapper[4698]: I0216 00:18:50.692661 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx" event={"ID":"b9a8da53-3db9-4151-9170-1ec4f853c766","Type":"ContainerDied","Data":"4e911e46c1a1d927938fbef30e81fc41aef779b3ae75df9383f4642ed7f60c02"}
Feb 16 00:18:51 crc kubenswrapper[4698]: I0216 00:18:51.721962 4698 generic.go:334] "Generic (PLEG): container finished" podID="b9a8da53-3db9-4151-9170-1ec4f853c766" containerID="5c95ef86903c6da811fac05af9816d7cb2f6305b733e3f75c5edc711355d1ab3" exitCode=0
Feb 16 00:18:51 crc kubenswrapper[4698]: I0216 00:18:51.722672 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx" event={"ID":"b9a8da53-3db9-4151-9170-1ec4f853c766","Type":"ContainerDied","Data":"5c95ef86903c6da811fac05af9816d7cb2f6305b733e3f75c5edc711355d1ab3"}
Feb 16 00:18:52 crc kubenswrapper[4698]: I0216 00:18:52.731860 4698 generic.go:334] "Generic (PLEG): container finished" podID="18c8a3ac-1fef-4511-baac-d9ca8e2b7a49" containerID="d3da4733bcdac8e86a11cd7a7ecab55d0e21dc29bd698a61c3d89993f55f8923" exitCode=0
Feb 16 00:18:52 crc kubenswrapper[4698]: I0216 00:18:52.731948 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4" event={"ID":"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49","Type":"ContainerDied","Data":"d3da4733bcdac8e86a11cd7a7ecab55d0e21dc29bd698a61c3d89993f55f8923"}
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.175036 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.321129 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf4pm\" (UniqueName: \"kubernetes.io/projected/b9a8da53-3db9-4151-9170-1ec4f853c766-kube-api-access-kf4pm\") pod \"b9a8da53-3db9-4151-9170-1ec4f853c766\" (UID: \"b9a8da53-3db9-4151-9170-1ec4f853c766\") "
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.321634 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b9a8da53-3db9-4151-9170-1ec4f853c766-util\") pod \"b9a8da53-3db9-4151-9170-1ec4f853c766\" (UID: \"b9a8da53-3db9-4151-9170-1ec4f853c766\") "
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.321760 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9a8da53-3db9-4151-9170-1ec4f853c766-bundle\") pod \"b9a8da53-3db9-4151-9170-1ec4f853c766\" (UID: \"b9a8da53-3db9-4151-9170-1ec4f853c766\") "
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.323575 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9a8da53-3db9-4151-9170-1ec4f853c766-bundle" (OuterVolumeSpecName: "bundle") pod "b9a8da53-3db9-4151-9170-1ec4f853c766" (UID: "b9a8da53-3db9-4151-9170-1ec4f853c766"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.328826 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9a8da53-3db9-4151-9170-1ec4f853c766-kube-api-access-kf4pm" (OuterVolumeSpecName: "kube-api-access-kf4pm") pod "b9a8da53-3db9-4151-9170-1ec4f853c766" (UID: "b9a8da53-3db9-4151-9170-1ec4f853c766"). InnerVolumeSpecName "kube-api-access-kf4pm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.347892 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9a8da53-3db9-4151-9170-1ec4f853c766-util" (OuterVolumeSpecName: "util") pod "b9a8da53-3db9-4151-9170-1ec4f853c766" (UID: "b9a8da53-3db9-4151-9170-1ec4f853c766"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.423101 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b9a8da53-3db9-4151-9170-1ec4f853c766-util\") on node \"crc\" DevicePath \"\""
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.423349 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9a8da53-3db9-4151-9170-1ec4f853c766-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.423364 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf4pm\" (UniqueName: \"kubernetes.io/projected/b9a8da53-3db9-4151-9170-1ec4f853c766-kube-api-access-kf4pm\") on node \"crc\" DevicePath \"\""
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.741350 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx" event={"ID":"b9a8da53-3db9-4151-9170-1ec4f853c766","Type":"ContainerDied","Data":"66463e6d652ca159d840feddafa557b5ac01f6b30d35280caf791f2eb6c59c16"}
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.741389 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx"
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.741391 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66463e6d652ca159d840feddafa557b5ac01f6b30d35280caf791f2eb6c59c16"
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.743380 4698 generic.go:334] "Generic (PLEG): container finished" podID="18c8a3ac-1fef-4511-baac-d9ca8e2b7a49" containerID="94804a0ce3b8397f4a27fa19bce2b228f28817613c7f233bb0d3835fb112b104" exitCode=0
Feb 16 00:18:53 crc kubenswrapper[4698]: I0216 00:18:53.743410 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4" event={"ID":"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49","Type":"ContainerDied","Data":"94804a0ce3b8397f4a27fa19bce2b228f28817613c7f233bb0d3835fb112b104"}
Feb 16 00:18:54 crc kubenswrapper[4698]: I0216 00:18:54.995915 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"
Feb 16 00:18:55 crc kubenswrapper[4698]: I0216 00:18:55.144049 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55vs9\" (UniqueName: \"kubernetes.io/projected/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-kube-api-access-55vs9\") pod \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\" (UID: \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\") "
Feb 16 00:18:55 crc kubenswrapper[4698]: I0216 00:18:55.144123 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-util\") pod \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\" (UID: \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\") "
Feb 16 00:18:55 crc kubenswrapper[4698]: I0216 00:18:55.144211 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-bundle\") pod \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\" (UID: \"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49\") "
Feb 16 00:18:55 crc kubenswrapper[4698]: I0216 00:18:55.145318 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-bundle" (OuterVolumeSpecName: "bundle") pod "18c8a3ac-1fef-4511-baac-d9ca8e2b7a49" (UID: "18c8a3ac-1fef-4511-baac-d9ca8e2b7a49"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:18:55 crc kubenswrapper[4698]: I0216 00:18:55.152770 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-kube-api-access-55vs9" (OuterVolumeSpecName: "kube-api-access-55vs9") pod "18c8a3ac-1fef-4511-baac-d9ca8e2b7a49" (UID: "18c8a3ac-1fef-4511-baac-d9ca8e2b7a49"). InnerVolumeSpecName "kube-api-access-55vs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:18:55 crc kubenswrapper[4698]: I0216 00:18:55.170804 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-util" (OuterVolumeSpecName: "util") pod "18c8a3ac-1fef-4511-baac-d9ca8e2b7a49" (UID: "18c8a3ac-1fef-4511-baac-d9ca8e2b7a49"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:18:55 crc kubenswrapper[4698]: I0216 00:18:55.246900 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55vs9\" (UniqueName: \"kubernetes.io/projected/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-kube-api-access-55vs9\") on node \"crc\" DevicePath \"\""
Feb 16 00:18:55 crc kubenswrapper[4698]: I0216 00:18:55.246932 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-util\") on node \"crc\" DevicePath \"\""
Feb 16 00:18:55 crc kubenswrapper[4698]: I0216 00:18:55.246943 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/18c8a3ac-1fef-4511-baac-d9ca8e2b7a49-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 00:18:55 crc kubenswrapper[4698]: I0216 00:18:55.798965 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4" event={"ID":"18c8a3ac-1fef-4511-baac-d9ca8e2b7a49","Type":"ContainerDied","Data":"f02d4fa30ecd8fb79505b6bba4c056c52f77383556738ff8bf44df4d03a5bf89"}
Feb 16 00:18:55 crc kubenswrapper[4698]: I0216 00:18:55.799022 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f02d4fa30ecd8fb79505b6bba4c056c52f77383556738ff8bf44df4d03a5bf89"
Feb 16 00:18:55 crc kubenswrapper[4698]: I0216 00:18:55.799111 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4"
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.781934 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk"]
Feb 16 00:18:56 crc kubenswrapper[4698]: E0216 00:18:56.782427 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c8a3ac-1fef-4511-baac-d9ca8e2b7a49" containerName="extract"
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.782440 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c8a3ac-1fef-4511-baac-d9ca8e2b7a49" containerName="extract"
Feb 16 00:18:56 crc kubenswrapper[4698]: E0216 00:18:56.782449 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c8a3ac-1fef-4511-baac-d9ca8e2b7a49" containerName="util"
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.782455 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c8a3ac-1fef-4511-baac-d9ca8e2b7a49" containerName="util"
Feb 16 00:18:56 crc kubenswrapper[4698]: E0216 00:18:56.782465 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9a8da53-3db9-4151-9170-1ec4f853c766" containerName="pull"
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.782470 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9a8da53-3db9-4151-9170-1ec4f853c766" containerName="pull"
Feb 16 00:18:56 crc kubenswrapper[4698]: E0216 00:18:56.782478 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c8a3ac-1fef-4511-baac-d9ca8e2b7a49" containerName="pull"
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.782484 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c8a3ac-1fef-4511-baac-d9ca8e2b7a49" containerName="pull"
Feb 16 00:18:56 crc kubenswrapper[4698]: E0216 00:18:56.782496 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9a8da53-3db9-4151-9170-1ec4f853c766" containerName="util"
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.782501 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9a8da53-3db9-4151-9170-1ec4f853c766" containerName="util"
Feb 16 00:18:56 crc kubenswrapper[4698]: E0216 00:18:56.782511 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9a8da53-3db9-4151-9170-1ec4f853c766" containerName="extract"
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.782517 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9a8da53-3db9-4151-9170-1ec4f853c766" containerName="extract"
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.782604 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c8a3ac-1fef-4511-baac-d9ca8e2b7a49" containerName="extract"
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.782632 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9a8da53-3db9-4151-9170-1ec4f853c766" containerName="extract"
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.783404 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk"
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.785555 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.803882 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk"]
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.868126 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk\" (UID: \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk"
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.868174 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98mqh\" (UniqueName: \"kubernetes.io/projected/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-kube-api-access-98mqh\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk\" (UID: \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk"
Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.868253 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk\" (UID: \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk"
Feb 16 00:18:56 crc kubenswrapper[4698]:
I0216 00:18:56.969347 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk\" (UID: \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk" Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.969457 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk\" (UID: \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk" Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.969491 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98mqh\" (UniqueName: \"kubernetes.io/projected/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-kube-api-access-98mqh\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk\" (UID: \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk" Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.970035 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk\" (UID: \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk" Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.970230 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk\" (UID: \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk" Feb 16 00:18:56 crc kubenswrapper[4698]: I0216 00:18:56.995689 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98mqh\" (UniqueName: \"kubernetes.io/projected/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-kube-api-access-98mqh\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk\" (UID: \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk" Feb 16 00:18:57 crc kubenswrapper[4698]: I0216 00:18:57.045861 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:18:57 crc kubenswrapper[4698]: I0216 00:18:57.045957 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:18:57 crc kubenswrapper[4698]: I0216 00:18:57.103557 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk" Feb 16 00:18:57 crc kubenswrapper[4698]: I0216 00:18:57.347469 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk"] Feb 16 00:18:57 crc kubenswrapper[4698]: I0216 00:18:57.813869 4698 generic.go:334] "Generic (PLEG): container finished" podID="31482fcb-ffd3-40fe-a5fc-5b21d6b522ce" containerID="e26bd973b4ab288060281295d5cf43627f9a382dd7cdd2fc0beefe44583f9098" exitCode=0 Feb 16 00:18:57 crc kubenswrapper[4698]: I0216 00:18:57.813934 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk" event={"ID":"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce","Type":"ContainerDied","Data":"e26bd973b4ab288060281295d5cf43627f9a382dd7cdd2fc0beefe44583f9098"} Feb 16 00:18:57 crc kubenswrapper[4698]: I0216 00:18:57.813968 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk" event={"ID":"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce","Type":"ContainerStarted","Data":"3bf041e1a026d4a7b05687ff7157d0cc66565aa35acb004116220ebb4b621c96"} Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.334019 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qkpw2"] Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.335339 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qkpw2" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.338575 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-z29n7" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.338804 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.339743 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.349397 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qkpw2"] Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.453166 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-qkdb5"] Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.454327 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-qkdb5" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.458362 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.458455 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-lpj76" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.489260 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-7rnfm"] Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.490069 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-7rnfm" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.492596 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f8612dc6-5549-459e-8e2d-16851e88463c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-676f96946c-7rnfm\" (UID: \"f8612dc6-5549-459e-8e2d-16851e88463c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-7rnfm" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.492682 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8612dc6-5549-459e-8e2d-16851e88463c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-676f96946c-7rnfm\" (UID: \"f8612dc6-5549-459e-8e2d-16851e88463c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-7rnfm" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.492715 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-676f96946c-qkdb5\" (UID: \"6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-qkdb5" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.492819 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hng4d\" (UniqueName: \"kubernetes.io/projected/8d84ce9c-7712-4137-8b1e-d5c2ce3b413b-kube-api-access-hng4d\") pod \"obo-prometheus-operator-68bc856cb9-qkpw2\" (UID: \"8d84ce9c-7712-4137-8b1e-d5c2ce3b413b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qkpw2" Feb 16 
00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.492851 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-676f96946c-qkdb5\" (UID: \"6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-qkdb5" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.500022 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-7rnfm"] Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.510561 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-qkdb5"] Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.593854 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hng4d\" (UniqueName: \"kubernetes.io/projected/8d84ce9c-7712-4137-8b1e-d5c2ce3b413b-kube-api-access-hng4d\") pod \"obo-prometheus-operator-68bc856cb9-qkpw2\" (UID: \"8d84ce9c-7712-4137-8b1e-d5c2ce3b413b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qkpw2" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.594170 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-676f96946c-qkdb5\" (UID: \"6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-qkdb5" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.594393 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/f8612dc6-5549-459e-8e2d-16851e88463c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-676f96946c-7rnfm\" (UID: \"f8612dc6-5549-459e-8e2d-16851e88463c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-7rnfm" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.594527 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8612dc6-5549-459e-8e2d-16851e88463c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-676f96946c-7rnfm\" (UID: \"f8612dc6-5549-459e-8e2d-16851e88463c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-7rnfm" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.594571 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-676f96946c-qkdb5\" (UID: \"6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-qkdb5" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.603803 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-676f96946c-qkdb5\" (UID: \"6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-qkdb5" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.607789 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8612dc6-5549-459e-8e2d-16851e88463c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-676f96946c-7rnfm\" (UID: \"f8612dc6-5549-459e-8e2d-16851e88463c\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-7rnfm" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.613912 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hng4d\" (UniqueName: \"kubernetes.io/projected/8d84ce9c-7712-4137-8b1e-d5c2ce3b413b-kube-api-access-hng4d\") pod \"obo-prometheus-operator-68bc856cb9-qkpw2\" (UID: \"8d84ce9c-7712-4137-8b1e-d5c2ce3b413b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qkpw2" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.620415 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-676f96946c-qkdb5\" (UID: \"6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-qkdb5" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.623158 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f8612dc6-5549-459e-8e2d-16851e88463c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-676f96946c-7rnfm\" (UID: \"f8612dc6-5549-459e-8e2d-16851e88463c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-7rnfm" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.656817 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qkpw2" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.670212 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5dhfb"] Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.671527 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5dhfb" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.674679 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-zq2zf" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.675164 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.695938 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/03ed7c21-b695-42b0-a85e-3dec0cb7595c-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5dhfb\" (UID: \"03ed7c21-b695-42b0-a85e-3dec0cb7595c\") " pod="openshift-operators/observability-operator-59bdc8b94-5dhfb" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.696125 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m5q7\" (UniqueName: \"kubernetes.io/projected/03ed7c21-b695-42b0-a85e-3dec0cb7595c-kube-api-access-7m5q7\") pod \"observability-operator-59bdc8b94-5dhfb\" (UID: \"03ed7c21-b695-42b0-a85e-3dec0cb7595c\") " pod="openshift-operators/observability-operator-59bdc8b94-5dhfb" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.701603 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5dhfb"] Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.771929 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-qkdb5" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.800750 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m5q7\" (UniqueName: \"kubernetes.io/projected/03ed7c21-b695-42b0-a85e-3dec0cb7595c-kube-api-access-7m5q7\") pod \"observability-operator-59bdc8b94-5dhfb\" (UID: \"03ed7c21-b695-42b0-a85e-3dec0cb7595c\") " pod="openshift-operators/observability-operator-59bdc8b94-5dhfb" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.800800 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/03ed7c21-b695-42b0-a85e-3dec0cb7595c-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5dhfb\" (UID: \"03ed7c21-b695-42b0-a85e-3dec0cb7595c\") " pod="openshift-operators/observability-operator-59bdc8b94-5dhfb" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.805845 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-7rnfm" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.807321 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/03ed7c21-b695-42b0-a85e-3dec0cb7595c-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5dhfb\" (UID: \"03ed7c21-b695-42b0-a85e-3dec0cb7595c\") " pod="openshift-operators/observability-operator-59bdc8b94-5dhfb" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.834567 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m5q7\" (UniqueName: \"kubernetes.io/projected/03ed7c21-b695-42b0-a85e-3dec0cb7595c-kube-api-access-7m5q7\") pod \"observability-operator-59bdc8b94-5dhfb\" (UID: \"03ed7c21-b695-42b0-a85e-3dec0cb7595c\") " pod="openshift-operators/observability-operator-59bdc8b94-5dhfb" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.865986 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-kb9qw"] Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.866880 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-kb9qw" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.871335 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-cvn8p" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.883550 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-kb9qw"] Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.917535 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a0b53be-4b28-4554-85bd-ddb9f580423e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-kb9qw\" (UID: \"5a0b53be-4b28-4554-85bd-ddb9f580423e\") " pod="openshift-operators/perses-operator-5bf474d74f-kb9qw" Feb 16 00:18:58 crc kubenswrapper[4698]: I0216 00:18:58.917734 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7lsr\" (UniqueName: \"kubernetes.io/projected/5a0b53be-4b28-4554-85bd-ddb9f580423e-kube-api-access-r7lsr\") pod \"perses-operator-5bf474d74f-kb9qw\" (UID: \"5a0b53be-4b28-4554-85bd-ddb9f580423e\") " pod="openshift-operators/perses-operator-5bf474d74f-kb9qw" Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.015169 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qkpw2"] Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.018934 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7lsr\" (UniqueName: \"kubernetes.io/projected/5a0b53be-4b28-4554-85bd-ddb9f580423e-kube-api-access-r7lsr\") pod \"perses-operator-5bf474d74f-kb9qw\" (UID: \"5a0b53be-4b28-4554-85bd-ddb9f580423e\") " pod="openshift-operators/perses-operator-5bf474d74f-kb9qw" Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.019007 
4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a0b53be-4b28-4554-85bd-ddb9f580423e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-kb9qw\" (UID: \"5a0b53be-4b28-4554-85bd-ddb9f580423e\") " pod="openshift-operators/perses-operator-5bf474d74f-kb9qw" Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.019893 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a0b53be-4b28-4554-85bd-ddb9f580423e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-kb9qw\" (UID: \"5a0b53be-4b28-4554-85bd-ddb9f580423e\") " pod="openshift-operators/perses-operator-5bf474d74f-kb9qw" Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.040773 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7lsr\" (UniqueName: \"kubernetes.io/projected/5a0b53be-4b28-4554-85bd-ddb9f580423e-kube-api-access-r7lsr\") pod \"perses-operator-5bf474d74f-kb9qw\" (UID: \"5a0b53be-4b28-4554-85bd-ddb9f580423e\") " pod="openshift-operators/perses-operator-5bf474d74f-kb9qw" Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.045373 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5dhfb" Feb 16 00:18:59 crc kubenswrapper[4698]: W0216 00:18:59.062042 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d84ce9c_7712_4137_8b1e_d5c2ce3b413b.slice/crio-5bf0d6e1cff019151e9bef7e42649f651c4945a6634d49c8417c22513654db46 WatchSource:0}: Error finding container 5bf0d6e1cff019151e9bef7e42649f651c4945a6634d49c8417c22513654db46: Status 404 returned error can't find the container with id 5bf0d6e1cff019151e9bef7e42649f651c4945a6634d49c8417c22513654db46 Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.119321 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-qkdb5"] Feb 16 00:18:59 crc kubenswrapper[4698]: W0216 00:18:59.137080 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dcc2d08_cb5c_43ba_b568_992bfcbf9ed4.slice/crio-943dd4c4b19dc56dc1fd427e6cc1c465a57057efeb67ae68d3b7b35063072a30 WatchSource:0}: Error finding container 943dd4c4b19dc56dc1fd427e6cc1c465a57057efeb67ae68d3b7b35063072a30: Status 404 returned error can't find the container with id 943dd4c4b19dc56dc1fd427e6cc1c465a57057efeb67ae68d3b7b35063072a30 Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.162796 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-7rnfm"] Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.197270 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-kb9qw" Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.383369 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5dhfb"] Feb 16 00:18:59 crc kubenswrapper[4698]: W0216 00:18:59.402603 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03ed7c21_b695_42b0_a85e_3dec0cb7595c.slice/crio-dbf5f2dfd1f268ada34e80b5c51f5a6ed2c175799750de98bc836d9c330c0967 WatchSource:0}: Error finding container dbf5f2dfd1f268ada34e80b5c51f5a6ed2c175799750de98bc836d9c330c0967: Status 404 returned error can't find the container with id dbf5f2dfd1f268ada34e80b5c51f5a6ed2c175799750de98bc836d9c330c0967 Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.477818 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-kb9qw"] Feb 16 00:18:59 crc kubenswrapper[4698]: W0216 00:18:59.490878 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a0b53be_4b28_4554_85bd_ddb9f580423e.slice/crio-709d0bd33e238d9a09c68fce933406f017dd795ced27be03432cecacfc0a6888 WatchSource:0}: Error finding container 709d0bd33e238d9a09c68fce933406f017dd795ced27be03432cecacfc0a6888: Status 404 returned error can't find the container with id 709d0bd33e238d9a09c68fce933406f017dd795ced27be03432cecacfc0a6888 Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.840752 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-7rnfm" event={"ID":"f8612dc6-5549-459e-8e2d-16851e88463c","Type":"ContainerStarted","Data":"6ee0a4d64b0eb69b389a72ec00e7494be908cfb474e48044184bcad5df55b8c8"} Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.842146 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-59bdc8b94-5dhfb" event={"ID":"03ed7c21-b695-42b0-a85e-3dec0cb7595c","Type":"ContainerStarted","Data":"dbf5f2dfd1f268ada34e80b5c51f5a6ed2c175799750de98bc836d9c330c0967"} Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.846605 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-qkdb5" event={"ID":"6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4","Type":"ContainerStarted","Data":"943dd4c4b19dc56dc1fd427e6cc1c465a57057efeb67ae68d3b7b35063072a30"} Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.849221 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qkpw2" event={"ID":"8d84ce9c-7712-4137-8b1e-d5c2ce3b413b","Type":"ContainerStarted","Data":"5bf0d6e1cff019151e9bef7e42649f651c4945a6634d49c8417c22513654db46"} Feb 16 00:18:59 crc kubenswrapper[4698]: I0216 00:18:59.850260 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-kb9qw" event={"ID":"5a0b53be-4b28-4554-85bd-ddb9f580423e","Type":"ContainerStarted","Data":"709d0bd33e238d9a09c68fce933406f017dd795ced27be03432cecacfc0a6888"} Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.240524 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-57cf495d44-jpvl6"] Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.242179 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-57cf495d44-jpvl6" Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.245049 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.245399 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.245641 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.245824 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-xscs7" Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.255713 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-57cf495d44-jpvl6"] Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.342798 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3bf58535-eb66-4d35-8cb2-515a6607c2cb-apiservice-cert\") pod \"elastic-operator-57cf495d44-jpvl6\" (UID: \"3bf58535-eb66-4d35-8cb2-515a6607c2cb\") " pod="service-telemetry/elastic-operator-57cf495d44-jpvl6" Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.342874 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc6d5\" (UniqueName: \"kubernetes.io/projected/3bf58535-eb66-4d35-8cb2-515a6607c2cb-kube-api-access-vc6d5\") pod \"elastic-operator-57cf495d44-jpvl6\" (UID: \"3bf58535-eb66-4d35-8cb2-515a6607c2cb\") " pod="service-telemetry/elastic-operator-57cf495d44-jpvl6" Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.342919 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3bf58535-eb66-4d35-8cb2-515a6607c2cb-webhook-cert\") pod \"elastic-operator-57cf495d44-jpvl6\" (UID: \"3bf58535-eb66-4d35-8cb2-515a6607c2cb\") " pod="service-telemetry/elastic-operator-57cf495d44-jpvl6" Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.444355 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3bf58535-eb66-4d35-8cb2-515a6607c2cb-apiservice-cert\") pod \"elastic-operator-57cf495d44-jpvl6\" (UID: \"3bf58535-eb66-4d35-8cb2-515a6607c2cb\") " pod="service-telemetry/elastic-operator-57cf495d44-jpvl6" Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.444441 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc6d5\" (UniqueName: \"kubernetes.io/projected/3bf58535-eb66-4d35-8cb2-515a6607c2cb-kube-api-access-vc6d5\") pod \"elastic-operator-57cf495d44-jpvl6\" (UID: \"3bf58535-eb66-4d35-8cb2-515a6607c2cb\") " pod="service-telemetry/elastic-operator-57cf495d44-jpvl6" Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.444477 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3bf58535-eb66-4d35-8cb2-515a6607c2cb-webhook-cert\") pod \"elastic-operator-57cf495d44-jpvl6\" (UID: \"3bf58535-eb66-4d35-8cb2-515a6607c2cb\") " pod="service-telemetry/elastic-operator-57cf495d44-jpvl6" Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.451525 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3bf58535-eb66-4d35-8cb2-515a6607c2cb-apiservice-cert\") pod \"elastic-operator-57cf495d44-jpvl6\" (UID: \"3bf58535-eb66-4d35-8cb2-515a6607c2cb\") " pod="service-telemetry/elastic-operator-57cf495d44-jpvl6" Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.458398 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3bf58535-eb66-4d35-8cb2-515a6607c2cb-webhook-cert\") pod \"elastic-operator-57cf495d44-jpvl6\" (UID: \"3bf58535-eb66-4d35-8cb2-515a6607c2cb\") " pod="service-telemetry/elastic-operator-57cf495d44-jpvl6" Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.467573 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc6d5\" (UniqueName: \"kubernetes.io/projected/3bf58535-eb66-4d35-8cb2-515a6607c2cb-kube-api-access-vc6d5\") pod \"elastic-operator-57cf495d44-jpvl6\" (UID: \"3bf58535-eb66-4d35-8cb2-515a6607c2cb\") " pod="service-telemetry/elastic-operator-57cf495d44-jpvl6" Feb 16 00:19:05 crc kubenswrapper[4698]: I0216 00:19:05.566065 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-57cf495d44-jpvl6" Feb 16 00:19:08 crc kubenswrapper[4698]: I0216 00:19:08.661781 4698 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 00:19:09 crc kubenswrapper[4698]: I0216 00:19:09.320105 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-x5hbq"] Feb 16 00:19:09 crc kubenswrapper[4698]: I0216 00:19:09.321204 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-x5hbq" Feb 16 00:19:09 crc kubenswrapper[4698]: I0216 00:19:09.325301 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-v7bgq" Feb 16 00:19:09 crc kubenswrapper[4698]: I0216 00:19:09.345859 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-x5hbq"] Feb 16 00:19:09 crc kubenswrapper[4698]: I0216 00:19:09.403061 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsmb9\" (UniqueName: \"kubernetes.io/projected/2bdcd000-0cae-4778-af41-440d53108488-kube-api-access-jsmb9\") pod \"interconnect-operator-5bb49f789d-x5hbq\" (UID: \"2bdcd000-0cae-4778-af41-440d53108488\") " pod="service-telemetry/interconnect-operator-5bb49f789d-x5hbq" Feb 16 00:19:09 crc kubenswrapper[4698]: I0216 00:19:09.504896 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsmb9\" (UniqueName: \"kubernetes.io/projected/2bdcd000-0cae-4778-af41-440d53108488-kube-api-access-jsmb9\") pod \"interconnect-operator-5bb49f789d-x5hbq\" (UID: \"2bdcd000-0cae-4778-af41-440d53108488\") " pod="service-telemetry/interconnect-operator-5bb49f789d-x5hbq" Feb 16 00:19:09 crc kubenswrapper[4698]: I0216 00:19:09.539665 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsmb9\" (UniqueName: \"kubernetes.io/projected/2bdcd000-0cae-4778-af41-440d53108488-kube-api-access-jsmb9\") pod \"interconnect-operator-5bb49f789d-x5hbq\" (UID: \"2bdcd000-0cae-4778-af41-440d53108488\") " pod="service-telemetry/interconnect-operator-5bb49f789d-x5hbq" Feb 16 00:19:09 crc kubenswrapper[4698]: I0216 00:19:09.645647 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-x5hbq" Feb 16 00:19:14 crc kubenswrapper[4698]: E0216 00:19:14.589454 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Feb 16 00:19:14 crc kubenswrapper[4698]: E0216 00:19:14.590100 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true --disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hng4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-qkpw2_openshift-operators(8d84ce9c-7712-4137-8b1e-d5c2ce3b413b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 00:19:14 crc kubenswrapper[4698]: E0216 00:19:14.591724 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qkpw2" podUID="8d84ce9c-7712-4137-8b1e-d5c2ce3b413b" Feb 16 00:19:14 crc kubenswrapper[4698]: E0216 00:19:14.990847 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qkpw2" podUID="8d84ce9c-7712-4137-8b1e-d5c2ce3b413b" Feb 16 00:19:15 crc kubenswrapper[4698]: I0216 00:19:15.201492 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-57cf495d44-jpvl6"] Feb 16 00:19:15 crc kubenswrapper[4698]: I0216 00:19:15.293452 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-x5hbq"] Feb 16 00:19:15 crc kubenswrapper[4698]: W0216 00:19:15.366342 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bdcd000_0cae_4778_af41_440d53108488.slice/crio-487326c0acbb534e807af6ccca77204115ee5d54ccf136914788d0a537b78f09 WatchSource:0}: Error finding container 487326c0acbb534e807af6ccca77204115ee5d54ccf136914788d0a537b78f09: Status 404 returned error can't find the container with id 487326c0acbb534e807af6ccca77204115ee5d54ccf136914788d0a537b78f09 Feb 16 00:19:15 crc kubenswrapper[4698]: I0216 00:19:15.998098 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-5dhfb" event={"ID":"03ed7c21-b695-42b0-a85e-3dec0cb7595c","Type":"ContainerStarted","Data":"c1687123600564cedfb7ab70f03bfe6cd03261b5e332e864c7df7e6bc1e1dce6"} Feb 16 00:19:15 crc kubenswrapper[4698]: I0216 00:19:15.998420 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-5dhfb" Feb 16 00:19:15 crc kubenswrapper[4698]: I0216 00:19:15.999844 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-x5hbq" event={"ID":"2bdcd000-0cae-4778-af41-440d53108488","Type":"ContainerStarted","Data":"487326c0acbb534e807af6ccca77204115ee5d54ccf136914788d0a537b78f09"} Feb 16 00:19:16 crc kubenswrapper[4698]: I0216 00:19:16.001660 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-qkdb5" event={"ID":"6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4","Type":"ContainerStarted","Data":"ea362a580b1fd2414f5e7f8680533c5d9a79d1ac8ff176240d66355d7800e58d"} Feb 16 00:19:16 crc kubenswrapper[4698]: I0216 00:19:16.010985 4698 generic.go:334] "Generic (PLEG): container finished" podID="31482fcb-ffd3-40fe-a5fc-5b21d6b522ce" containerID="3b4ccdb4ad6e6318b2d0c7cec2800e0fde6bb883bc6012bba5f32bf1d8907d4c" exitCode=0 Feb 16 00:19:16 crc kubenswrapper[4698]: I0216 00:19:16.011090 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk" event={"ID":"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce","Type":"ContainerDied","Data":"3b4ccdb4ad6e6318b2d0c7cec2800e0fde6bb883bc6012bba5f32bf1d8907d4c"} Feb 16 00:19:16 crc kubenswrapper[4698]: I0216 00:19:16.013767 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-57cf495d44-jpvl6" event={"ID":"3bf58535-eb66-4d35-8cb2-515a6607c2cb","Type":"ContainerStarted","Data":"dd507f9b32563eb7912fb300d9b52183686746c8eedf6096405aeb672bcf6cf3"} Feb 16 00:19:16 crc kubenswrapper[4698]: I0216 00:19:16.016412 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-kb9qw" event={"ID":"5a0b53be-4b28-4554-85bd-ddb9f580423e","Type":"ContainerStarted","Data":"fd54dc8b91382aca2f9eb4ee4f5487cd45c001fa100c2c573784d135aee2f3d2"} Feb 16 00:19:16 crc kubenswrapper[4698]: I0216 00:19:16.016873 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-kb9qw" Feb 16 00:19:16 crc kubenswrapper[4698]: I0216 00:19:16.046239 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-7rnfm" 
event={"ID":"f8612dc6-5549-459e-8e2d-16851e88463c","Type":"ContainerStarted","Data":"9d6807c730ac1e1b9ae84efc5cfd82ba07ab9fdcff3c40cb27fbf7b7c9c74031"} Feb 16 00:19:16 crc kubenswrapper[4698]: I0216 00:19:16.048481 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-5dhfb" podStartSLOduration=2.583628801 podStartE2EDuration="18.048457967s" podCreationTimestamp="2026-02-16 00:18:58 +0000 UTC" firstStartedPulling="2026-02-16 00:18:59.41790114 +0000 UTC m=+749.075799902" lastFinishedPulling="2026-02-16 00:19:14.882730306 +0000 UTC m=+764.540629068" observedRunningTime="2026-02-16 00:19:16.042397899 +0000 UTC m=+765.700296671" watchObservedRunningTime="2026-02-16 00:19:16.048457967 +0000 UTC m=+765.706356729" Feb 16 00:19:16 crc kubenswrapper[4698]: I0216 00:19:16.053740 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-5dhfb" Feb 16 00:19:16 crc kubenswrapper[4698]: I0216 00:19:16.093556 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-kb9qw" podStartSLOduration=2.7000536569999998 podStartE2EDuration="18.093533647s" podCreationTimestamp="2026-02-16 00:18:58 +0000 UTC" firstStartedPulling="2026-02-16 00:18:59.492930081 +0000 UTC m=+749.150828843" lastFinishedPulling="2026-02-16 00:19:14.886410071 +0000 UTC m=+764.544308833" observedRunningTime="2026-02-16 00:19:16.087950274 +0000 UTC m=+765.745849056" watchObservedRunningTime="2026-02-16 00:19:16.093533647 +0000 UTC m=+765.751432409" Feb 16 00:19:16 crc kubenswrapper[4698]: I0216 00:19:16.178837 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-qkdb5" podStartSLOduration=2.434367586 podStartE2EDuration="18.178812905s" podCreationTimestamp="2026-02-16 00:18:58 +0000 UTC" 
firstStartedPulling="2026-02-16 00:18:59.140388663 +0000 UTC m=+748.798287425" lastFinishedPulling="2026-02-16 00:19:14.884833972 +0000 UTC m=+764.542732744" observedRunningTime="2026-02-16 00:19:16.148583176 +0000 UTC m=+765.806481938" watchObservedRunningTime="2026-02-16 00:19:16.178812905 +0000 UTC m=+765.836711667" Feb 16 00:19:16 crc kubenswrapper[4698]: I0216 00:19:16.179662 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-676f96946c-7rnfm" podStartSLOduration=2.625964196 podStartE2EDuration="18.179653211s" podCreationTimestamp="2026-02-16 00:18:58 +0000 UTC" firstStartedPulling="2026-02-16 00:18:59.232113331 +0000 UTC m=+748.890012093" lastFinishedPulling="2026-02-16 00:19:14.785802346 +0000 UTC m=+764.443701108" observedRunningTime="2026-02-16 00:19:16.172965774 +0000 UTC m=+765.830864556" watchObservedRunningTime="2026-02-16 00:19:16.179653211 +0000 UTC m=+765.837551973" Feb 16 00:19:17 crc kubenswrapper[4698]: I0216 00:19:17.065550 4698 generic.go:334] "Generic (PLEG): container finished" podID="31482fcb-ffd3-40fe-a5fc-5b21d6b522ce" containerID="89fb8b38a1819cd19aa1349766fd6f50ac8aeac8b835ea56970c4d20009c0d86" exitCode=0 Feb 16 00:19:17 crc kubenswrapper[4698]: I0216 00:19:17.066470 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk" event={"ID":"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce","Type":"ContainerDied","Data":"89fb8b38a1819cd19aa1349766fd6f50ac8aeac8b835ea56970c4d20009c0d86"} Feb 16 00:19:18 crc kubenswrapper[4698]: I0216 00:19:18.737761 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f48p4"] Feb 16 00:19:18 crc kubenswrapper[4698]: I0216 00:19:18.739366 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:18 crc kubenswrapper[4698]: I0216 00:19:18.754535 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f48p4"] Feb 16 00:19:18 crc kubenswrapper[4698]: I0216 00:19:18.791383 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1003885-07f6-45cb-b3f1-a45aff5c0c11-catalog-content\") pod \"redhat-operators-f48p4\" (UID: \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\") " pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:18 crc kubenswrapper[4698]: I0216 00:19:18.791514 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff9gb\" (UniqueName: \"kubernetes.io/projected/e1003885-07f6-45cb-b3f1-a45aff5c0c11-kube-api-access-ff9gb\") pod \"redhat-operators-f48p4\" (UID: \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\") " pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:18 crc kubenswrapper[4698]: I0216 00:19:18.791563 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1003885-07f6-45cb-b3f1-a45aff5c0c11-utilities\") pod \"redhat-operators-f48p4\" (UID: \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\") " pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:18 crc kubenswrapper[4698]: I0216 00:19:18.892567 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff9gb\" (UniqueName: \"kubernetes.io/projected/e1003885-07f6-45cb-b3f1-a45aff5c0c11-kube-api-access-ff9gb\") pod \"redhat-operators-f48p4\" (UID: \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\") " pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:18 crc kubenswrapper[4698]: I0216 00:19:18.892676 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1003885-07f6-45cb-b3f1-a45aff5c0c11-utilities\") pod \"redhat-operators-f48p4\" (UID: \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\") " pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:18 crc kubenswrapper[4698]: I0216 00:19:18.892715 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1003885-07f6-45cb-b3f1-a45aff5c0c11-catalog-content\") pod \"redhat-operators-f48p4\" (UID: \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\") " pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:18 crc kubenswrapper[4698]: I0216 00:19:18.893500 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1003885-07f6-45cb-b3f1-a45aff5c0c11-catalog-content\") pod \"redhat-operators-f48p4\" (UID: \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\") " pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:18 crc kubenswrapper[4698]: I0216 00:19:18.893519 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1003885-07f6-45cb-b3f1-a45aff5c0c11-utilities\") pod \"redhat-operators-f48p4\" (UID: \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\") " pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:18 crc kubenswrapper[4698]: I0216 00:19:18.926781 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff9gb\" (UniqueName: \"kubernetes.io/projected/e1003885-07f6-45cb-b3f1-a45aff5c0c11-kube-api-access-ff9gb\") pod \"redhat-operators-f48p4\" (UID: \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\") " pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:19 crc kubenswrapper[4698]: I0216 00:19:19.070955 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:19 crc kubenswrapper[4698]: I0216 00:19:19.536830 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk" Feb 16 00:19:19 crc kubenswrapper[4698]: I0216 00:19:19.604885 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98mqh\" (UniqueName: \"kubernetes.io/projected/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-kube-api-access-98mqh\") pod \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\" (UID: \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\") " Feb 16 00:19:19 crc kubenswrapper[4698]: I0216 00:19:19.605351 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-bundle\") pod \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\" (UID: \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\") " Feb 16 00:19:19 crc kubenswrapper[4698]: I0216 00:19:19.605484 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-util\") pod \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\" (UID: \"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce\") " Feb 16 00:19:19 crc kubenswrapper[4698]: I0216 00:19:19.606703 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-bundle" (OuterVolumeSpecName: "bundle") pod "31482fcb-ffd3-40fe-a5fc-5b21d6b522ce" (UID: "31482fcb-ffd3-40fe-a5fc-5b21d6b522ce"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:19:19 crc kubenswrapper[4698]: I0216 00:19:19.609859 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-kube-api-access-98mqh" (OuterVolumeSpecName: "kube-api-access-98mqh") pod "31482fcb-ffd3-40fe-a5fc-5b21d6b522ce" (UID: "31482fcb-ffd3-40fe-a5fc-5b21d6b522ce"). InnerVolumeSpecName "kube-api-access-98mqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:19:19 crc kubenswrapper[4698]: I0216 00:19:19.624502 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-util" (OuterVolumeSpecName: "util") pod "31482fcb-ffd3-40fe-a5fc-5b21d6b522ce" (UID: "31482fcb-ffd3-40fe-a5fc-5b21d6b522ce"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:19:19 crc kubenswrapper[4698]: I0216 00:19:19.706921 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-util\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:19 crc kubenswrapper[4698]: I0216 00:19:19.706971 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98mqh\" (UniqueName: \"kubernetes.io/projected/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-kube-api-access-98mqh\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:19 crc kubenswrapper[4698]: I0216 00:19:19.706985 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31482fcb-ffd3-40fe-a5fc-5b21d6b522ce-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:20 crc kubenswrapper[4698]: I0216 00:19:20.095258 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk" 
event={"ID":"31482fcb-ffd3-40fe-a5fc-5b21d6b522ce","Type":"ContainerDied","Data":"3bf041e1a026d4a7b05687ff7157d0cc66565aa35acb004116220ebb4b621c96"} Feb 16 00:19:20 crc kubenswrapper[4698]: I0216 00:19:20.095304 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bf041e1a026d4a7b05687ff7157d0cc66565aa35acb004116220ebb4b621c96" Feb 16 00:19:20 crc kubenswrapper[4698]: I0216 00:19:20.095305 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk" Feb 16 00:19:25 crc kubenswrapper[4698]: I0216 00:19:25.313311 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f48p4"] Feb 16 00:19:25 crc kubenswrapper[4698]: W0216 00:19:25.325446 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1003885_07f6_45cb_b3f1_a45aff5c0c11.slice/crio-7eb2d96c38ed6fb34c47ed3537cec8e681de4d41649e900aa1f39a7841f85a01 WatchSource:0}: Error finding container 7eb2d96c38ed6fb34c47ed3537cec8e681de4d41649e900aa1f39a7841f85a01: Status 404 returned error can't find the container with id 7eb2d96c38ed6fb34c47ed3537cec8e681de4d41649e900aa1f39a7841f85a01 Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.143387 4698 generic.go:334] "Generic (PLEG): container finished" podID="e1003885-07f6-45cb-b3f1-a45aff5c0c11" containerID="ab16e55170b9736a51581017bfcb07686dcfbae28d03c2dae376c0e5bc8c7102" exitCode=0 Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.143650 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f48p4" event={"ID":"e1003885-07f6-45cb-b3f1-a45aff5c0c11","Type":"ContainerDied","Data":"ab16e55170b9736a51581017bfcb07686dcfbae28d03c2dae376c0e5bc8c7102"} Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.143679 4698 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-f48p4" event={"ID":"e1003885-07f6-45cb-b3f1-a45aff5c0c11","Type":"ContainerStarted","Data":"7eb2d96c38ed6fb34c47ed3537cec8e681de4d41649e900aa1f39a7841f85a01"} Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.148012 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-x5hbq" event={"ID":"2bdcd000-0cae-4778-af41-440d53108488","Type":"ContainerStarted","Data":"f3705ac849655d75118d833de2814b49ee5e49c521ed184995285ea7cd0dafc4"} Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.150849 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-57cf495d44-jpvl6" event={"ID":"3bf58535-eb66-4d35-8cb2-515a6607c2cb","Type":"ContainerStarted","Data":"ce30b92bdfd61cee7bd3ce4a99a4c4548d5d18ba946185974d70af136e6da9a1"} Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.204914 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-x5hbq" podStartSLOduration=7.472486145 podStartE2EDuration="17.204897595s" podCreationTimestamp="2026-02-16 00:19:09 +0000 UTC" firstStartedPulling="2026-02-16 00:19:15.368976307 +0000 UTC m=+765.026875069" lastFinishedPulling="2026-02-16 00:19:25.101387757 +0000 UTC m=+774.759286519" observedRunningTime="2026-02-16 00:19:26.203478512 +0000 UTC m=+775.861377274" watchObservedRunningTime="2026-02-16 00:19:26.204897595 +0000 UTC m=+775.862796377" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.233767 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-57cf495d44-jpvl6" podStartSLOduration=11.405474504 podStartE2EDuration="21.233747431s" podCreationTimestamp="2026-02-16 00:19:05 +0000 UTC" firstStartedPulling="2026-02-16 00:19:15.242572561 +0000 UTC m=+764.900471323" lastFinishedPulling="2026-02-16 00:19:25.070845478 +0000 UTC m=+774.728744250" 
observedRunningTime="2026-02-16 00:19:26.227799966 +0000 UTC m=+775.885698738" watchObservedRunningTime="2026-02-16 00:19:26.233747431 +0000 UTC m=+775.891646203" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.500061 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 16 00:19:26 crc kubenswrapper[4698]: E0216 00:19:26.501038 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31482fcb-ffd3-40fe-a5fc-5b21d6b522ce" containerName="extract" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.501105 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="31482fcb-ffd3-40fe-a5fc-5b21d6b522ce" containerName="extract" Feb 16 00:19:26 crc kubenswrapper[4698]: E0216 00:19:26.501158 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31482fcb-ffd3-40fe-a5fc-5b21d6b522ce" containerName="pull" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.501264 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="31482fcb-ffd3-40fe-a5fc-5b21d6b522ce" containerName="pull" Feb 16 00:19:26 crc kubenswrapper[4698]: E0216 00:19:26.501330 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31482fcb-ffd3-40fe-a5fc-5b21d6b522ce" containerName="util" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.501377 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="31482fcb-ffd3-40fe-a5fc-5b21d6b522ce" containerName="util" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.501522 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="31482fcb-ffd3-40fe-a5fc-5b21d6b522ce" containerName="extract" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.502426 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.504992 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.505153 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.505300 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.505583 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.505952 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.506265 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.506425 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-gm4w2" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.507961 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.509133 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.527718 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.616007 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.616267 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.616669 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.616754 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.616836 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d76004b3-8be8-40f4-be5e-e9a792bebce1-downward-api\") pod \"elasticsearch-es-default-0\" 
(UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.616927 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.617022 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.617101 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.617182 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.617288 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.617369 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.617451 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.617533 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.617609 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: 
\"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.617711 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.719167 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.719585 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.719810 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 
00:19:26.719991 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.720129 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.719657 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.720369 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.720527 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" 
(UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.720716 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.720872 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.721065 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.721245 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.721451 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: 
\"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.721604 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.721799 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.721960 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d76004b3-8be8-40f4-be5e-e9a792bebce1-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.722141 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 
00:19:26.721868 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.720907 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.723007 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.721064 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.723387 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 
crc kubenswrapper[4698]: I0216 00:19:26.723699 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.726870 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.727337 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.728354 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d76004b3-8be8-40f4-be5e-e9a792bebce1-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.735280 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: 
\"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.735296 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.735751 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.737682 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/d76004b3-8be8-40f4-be5e-e9a792bebce1-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d76004b3-8be8-40f4-be5e-e9a792bebce1\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:26 crc kubenswrapper[4698]: I0216 00:19:26.820056 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:19:27 crc kubenswrapper[4698]: I0216 00:19:27.055314 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:19:27 crc kubenswrapper[4698]: I0216 00:19:27.055688 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:19:27 crc kubenswrapper[4698]: I0216 00:19:27.055748 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:19:27 crc kubenswrapper[4698]: I0216 00:19:27.056233 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28455df6b45ac3d964cdd4d7f6adb7fb0a6e0a48a0dcb629da0d78838dbdbdad"} pod="openshift-machine-config-operator/machine-config-daemon-z56m2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 00:19:27 crc kubenswrapper[4698]: I0216 00:19:27.056301 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" containerID="cri-o://28455df6b45ac3d964cdd4d7f6adb7fb0a6e0a48a0dcb629da0d78838dbdbdad" gracePeriod=600 Feb 16 00:19:27 crc kubenswrapper[4698]: I0216 00:19:27.140467 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/elasticsearch-es-default-0"] Feb 16 00:19:27 crc kubenswrapper[4698]: W0216 00:19:27.157241 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd76004b3_8be8_40f4_be5e_e9a792bebce1.slice/crio-3b90626b6f7c9add4c3d2b2b7dd62407ee8e7d70bb638ce8982148220a2d06a9 WatchSource:0}: Error finding container 3b90626b6f7c9add4c3d2b2b7dd62407ee8e7d70bb638ce8982148220a2d06a9: Status 404 returned error can't find the container with id 3b90626b6f7c9add4c3d2b2b7dd62407ee8e7d70bb638ce8982148220a2d06a9 Feb 16 00:19:27 crc kubenswrapper[4698]: I0216 00:19:27.164879 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f48p4" event={"ID":"e1003885-07f6-45cb-b3f1-a45aff5c0c11","Type":"ContainerStarted","Data":"2f1741d0507c5e594f64731f7e9bd0d0f716e60b736855ee4107d8db0dd260e5"} Feb 16 00:19:28 crc kubenswrapper[4698]: I0216 00:19:28.177714 4698 generic.go:334] "Generic (PLEG): container finished" podID="7b351654-277f-4d0d-84f9-b003f934936c" containerID="28455df6b45ac3d964cdd4d7f6adb7fb0a6e0a48a0dcb629da0d78838dbdbdad" exitCode=0 Feb 16 00:19:28 crc kubenswrapper[4698]: I0216 00:19:28.177777 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerDied","Data":"28455df6b45ac3d964cdd4d7f6adb7fb0a6e0a48a0dcb629da0d78838dbdbdad"} Feb 16 00:19:28 crc kubenswrapper[4698]: I0216 00:19:28.178065 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerStarted","Data":"d6541b3cb76f710dedaef0b85b0e104e861ef72466cd38ea058959a35248ef97"} Feb 16 00:19:28 crc kubenswrapper[4698]: I0216 00:19:28.178088 4698 scope.go:117] "RemoveContainer" 
containerID="95b91d2cb7e56ab2acf12e0ef16910725a29cc735baa309c370a87fec7d9c648" Feb 16 00:19:28 crc kubenswrapper[4698]: I0216 00:19:28.180025 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d76004b3-8be8-40f4-be5e-e9a792bebce1","Type":"ContainerStarted","Data":"3b90626b6f7c9add4c3d2b2b7dd62407ee8e7d70bb638ce8982148220a2d06a9"} Feb 16 00:19:28 crc kubenswrapper[4698]: I0216 00:19:28.187293 4698 generic.go:334] "Generic (PLEG): container finished" podID="e1003885-07f6-45cb-b3f1-a45aff5c0c11" containerID="2f1741d0507c5e594f64731f7e9bd0d0f716e60b736855ee4107d8db0dd260e5" exitCode=0 Feb 16 00:19:28 crc kubenswrapper[4698]: I0216 00:19:28.187334 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f48p4" event={"ID":"e1003885-07f6-45cb-b3f1-a45aff5c0c11","Type":"ContainerDied","Data":"2f1741d0507c5e594f64731f7e9bd0d0f716e60b736855ee4107d8db0dd260e5"} Feb 16 00:19:29 crc kubenswrapper[4698]: I0216 00:19:29.199949 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f48p4" event={"ID":"e1003885-07f6-45cb-b3f1-a45aff5c0c11","Type":"ContainerStarted","Data":"f13b3588b0eff5b2fdeaabfcc0212de623e61ffdc7435d1ee5687b1f01079350"} Feb 16 00:19:29 crc kubenswrapper[4698]: I0216 00:19:29.201038 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-kb9qw" Feb 16 00:19:29 crc kubenswrapper[4698]: I0216 00:19:29.308986 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f48p4" podStartSLOduration=8.839007326 podStartE2EDuration="11.308964579s" podCreationTimestamp="2026-02-16 00:19:18 +0000 UTC" firstStartedPulling="2026-02-16 00:19:26.145898753 +0000 UTC m=+775.803797515" lastFinishedPulling="2026-02-16 00:19:28.615856006 +0000 UTC m=+778.273754768" observedRunningTime="2026-02-16 
00:19:29.282502088 +0000 UTC m=+778.940400850" watchObservedRunningTime="2026-02-16 00:19:29.308964579 +0000 UTC m=+778.966863351" Feb 16 00:19:32 crc kubenswrapper[4698]: I0216 00:19:32.260249 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qkpw2" event={"ID":"8d84ce9c-7712-4137-8b1e-d5c2ce3b413b","Type":"ContainerStarted","Data":"d18293165e7f88fed0f7d052e7347f12072e2555c616c03d008e685806b5c134"} Feb 16 00:19:32 crc kubenswrapper[4698]: I0216 00:19:32.282159 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qkpw2" podStartSLOduration=1.546853359 podStartE2EDuration="34.282136146s" podCreationTimestamp="2026-02-16 00:18:58 +0000 UTC" firstStartedPulling="2026-02-16 00:18:59.06977071 +0000 UTC m=+748.727669472" lastFinishedPulling="2026-02-16 00:19:31.805053497 +0000 UTC m=+781.462952259" observedRunningTime="2026-02-16 00:19:32.278886956 +0000 UTC m=+781.936785718" watchObservedRunningTime="2026-02-16 00:19:32.282136146 +0000 UTC m=+781.940034908" Feb 16 00:19:38 crc kubenswrapper[4698]: I0216 00:19:38.295031 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-n4gm2"] Feb 16 00:19:38 crc kubenswrapper[4698]: I0216 00:19:38.296585 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-n4gm2" Feb 16 00:19:38 crc kubenswrapper[4698]: I0216 00:19:38.300947 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 16 00:19:38 crc kubenswrapper[4698]: I0216 00:19:38.301359 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 16 00:19:38 crc kubenswrapper[4698]: I0216 00:19:38.301469 4698 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-jnnp2" Feb 16 00:19:38 crc kubenswrapper[4698]: I0216 00:19:38.309410 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-n4gm2"] Feb 16 00:19:38 crc kubenswrapper[4698]: I0216 00:19:38.446201 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jqzr\" (UniqueName: \"kubernetes.io/projected/1dafa441-a232-4849-84a9-d35e021ba65f-kube-api-access-5jqzr\") pod \"cert-manager-operator-controller-manager-5586865c96-n4gm2\" (UID: \"1dafa441-a232-4849-84a9-d35e021ba65f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-n4gm2" Feb 16 00:19:38 crc kubenswrapper[4698]: I0216 00:19:38.446326 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1dafa441-a232-4849-84a9-d35e021ba65f-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-n4gm2\" (UID: \"1dafa441-a232-4849-84a9-d35e021ba65f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-n4gm2" Feb 16 00:19:38 crc kubenswrapper[4698]: I0216 00:19:38.547825 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/1dafa441-a232-4849-84a9-d35e021ba65f-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-n4gm2\" (UID: \"1dafa441-a232-4849-84a9-d35e021ba65f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-n4gm2" Feb 16 00:19:38 crc kubenswrapper[4698]: I0216 00:19:38.547926 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jqzr\" (UniqueName: \"kubernetes.io/projected/1dafa441-a232-4849-84a9-d35e021ba65f-kube-api-access-5jqzr\") pod \"cert-manager-operator-controller-manager-5586865c96-n4gm2\" (UID: \"1dafa441-a232-4849-84a9-d35e021ba65f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-n4gm2" Feb 16 00:19:38 crc kubenswrapper[4698]: I0216 00:19:38.548653 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1dafa441-a232-4849-84a9-d35e021ba65f-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-n4gm2\" (UID: \"1dafa441-a232-4849-84a9-d35e021ba65f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-n4gm2" Feb 16 00:19:38 crc kubenswrapper[4698]: I0216 00:19:38.571481 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jqzr\" (UniqueName: \"kubernetes.io/projected/1dafa441-a232-4849-84a9-d35e021ba65f-kube-api-access-5jqzr\") pod \"cert-manager-operator-controller-manager-5586865c96-n4gm2\" (UID: \"1dafa441-a232-4849-84a9-d35e021ba65f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-n4gm2" Feb 16 00:19:38 crc kubenswrapper[4698]: I0216 00:19:38.631304 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-n4gm2" Feb 16 00:19:39 crc kubenswrapper[4698]: I0216 00:19:39.071727 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:39 crc kubenswrapper[4698]: I0216 00:19:39.071792 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:39 crc kubenswrapper[4698]: I0216 00:19:39.155591 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:39 crc kubenswrapper[4698]: I0216 00:19:39.391394 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:42 crc kubenswrapper[4698]: I0216 00:19:42.543412 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f48p4"] Feb 16 00:19:42 crc kubenswrapper[4698]: I0216 00:19:42.544264 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f48p4" podUID="e1003885-07f6-45cb-b3f1-a45aff5c0c11" containerName="registry-server" containerID="cri-o://f13b3588b0eff5b2fdeaabfcc0212de623e61ffdc7435d1ee5687b1f01079350" gracePeriod=2 Feb 16 00:19:44 crc kubenswrapper[4698]: I0216 00:19:44.347262 4698 generic.go:334] "Generic (PLEG): container finished" podID="e1003885-07f6-45cb-b3f1-a45aff5c0c11" containerID="f13b3588b0eff5b2fdeaabfcc0212de623e61ffdc7435d1ee5687b1f01079350" exitCode=0 Feb 16 00:19:44 crc kubenswrapper[4698]: I0216 00:19:44.347395 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f48p4" event={"ID":"e1003885-07f6-45cb-b3f1-a45aff5c0c11","Type":"ContainerDied","Data":"f13b3588b0eff5b2fdeaabfcc0212de623e61ffdc7435d1ee5687b1f01079350"} Feb 16 00:19:45 crc 
kubenswrapper[4698]: I0216 00:19:45.335912 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:45 crc kubenswrapper[4698]: I0216 00:19:45.372913 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f48p4" event={"ID":"e1003885-07f6-45cb-b3f1-a45aff5c0c11","Type":"ContainerDied","Data":"7eb2d96c38ed6fb34c47ed3537cec8e681de4d41649e900aa1f39a7841f85a01"} Feb 16 00:19:45 crc kubenswrapper[4698]: I0216 00:19:45.372982 4698 scope.go:117] "RemoveContainer" containerID="f13b3588b0eff5b2fdeaabfcc0212de623e61ffdc7435d1ee5687b1f01079350" Feb 16 00:19:45 crc kubenswrapper[4698]: I0216 00:19:45.373532 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f48p4" Feb 16 00:19:45 crc kubenswrapper[4698]: I0216 00:19:45.444553 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff9gb\" (UniqueName: \"kubernetes.io/projected/e1003885-07f6-45cb-b3f1-a45aff5c0c11-kube-api-access-ff9gb\") pod \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\" (UID: \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\") " Feb 16 00:19:45 crc kubenswrapper[4698]: I0216 00:19:45.444622 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1003885-07f6-45cb-b3f1-a45aff5c0c11-utilities\") pod \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\" (UID: \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\") " Feb 16 00:19:45 crc kubenswrapper[4698]: I0216 00:19:45.444664 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1003885-07f6-45cb-b3f1-a45aff5c0c11-catalog-content\") pod \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\" (UID: \"e1003885-07f6-45cb-b3f1-a45aff5c0c11\") " Feb 16 00:19:45 crc kubenswrapper[4698]: I0216 00:19:45.445637 
4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1003885-07f6-45cb-b3f1-a45aff5c0c11-utilities" (OuterVolumeSpecName: "utilities") pod "e1003885-07f6-45cb-b3f1-a45aff5c0c11" (UID: "e1003885-07f6-45cb-b3f1-a45aff5c0c11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:19:45 crc kubenswrapper[4698]: I0216 00:19:45.457027 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1003885-07f6-45cb-b3f1-a45aff5c0c11-kube-api-access-ff9gb" (OuterVolumeSpecName: "kube-api-access-ff9gb") pod "e1003885-07f6-45cb-b3f1-a45aff5c0c11" (UID: "e1003885-07f6-45cb-b3f1-a45aff5c0c11"). InnerVolumeSpecName "kube-api-access-ff9gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:19:45 crc kubenswrapper[4698]: I0216 00:19:45.547169 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff9gb\" (UniqueName: \"kubernetes.io/projected/e1003885-07f6-45cb-b3f1-a45aff5c0c11-kube-api-access-ff9gb\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:45 crc kubenswrapper[4698]: I0216 00:19:45.547219 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1003885-07f6-45cb-b3f1-a45aff5c0c11-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:45 crc kubenswrapper[4698]: I0216 00:19:45.591706 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1003885-07f6-45cb-b3f1-a45aff5c0c11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1003885-07f6-45cb-b3f1-a45aff5c0c11" (UID: "e1003885-07f6-45cb-b3f1-a45aff5c0c11"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:19:45 crc kubenswrapper[4698]: I0216 00:19:45.648909 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1003885-07f6-45cb-b3f1-a45aff5c0c11-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:45 crc kubenswrapper[4698]: I0216 00:19:45.712989 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f48p4"] Feb 16 00:19:45 crc kubenswrapper[4698]: I0216 00:19:45.725102 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f48p4"] Feb 16 00:19:46 crc kubenswrapper[4698]: I0216 00:19:46.165167 4698 scope.go:117] "RemoveContainer" containerID="2f1741d0507c5e594f64731f7e9bd0d0f716e60b736855ee4107d8db0dd260e5" Feb 16 00:19:46 crc kubenswrapper[4698]: I0216 00:19:46.207778 4698 scope.go:117] "RemoveContainer" containerID="ab16e55170b9736a51581017bfcb07686dcfbae28d03c2dae376c0e5bc8c7102" Feb 16 00:19:46 crc kubenswrapper[4698]: I0216 00:19:46.382017 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-n4gm2"] Feb 16 00:19:46 crc kubenswrapper[4698]: E0216 00:19:46.450930 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Feb 16 00:19:46 crc kubenswrapper[4698]: E0216 00:19:46.451274 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c 
/mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:
true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(d76004b3-8be8-40f4-be5e-e9a792bebce1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 00:19:46 crc kubenswrapper[4698]: E0216 00:19:46.452712 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="d76004b3-8be8-40f4-be5e-e9a792bebce1" Feb 16 00:19:47 crc 
kubenswrapper[4698]: I0216 00:19:47.241978 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1003885-07f6-45cb-b3f1-a45aff5c0c11" path="/var/lib/kubelet/pods/e1003885-07f6-45cb-b3f1-a45aff5c0c11/volumes" Feb 16 00:19:47 crc kubenswrapper[4698]: I0216 00:19:47.394689 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-n4gm2" event={"ID":"1dafa441-a232-4849-84a9-d35e021ba65f","Type":"ContainerStarted","Data":"d41cb5633217c5d64e2f0bf9508ae69900cd8101c93e2a43ff8bd53ce80c09c4"} Feb 16 00:19:47 crc kubenswrapper[4698]: E0216 00:19:47.397401 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="d76004b3-8be8-40f4-be5e-e9a792bebce1" Feb 16 00:19:47 crc kubenswrapper[4698]: I0216 00:19:47.611039 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 16 00:19:47 crc kubenswrapper[4698]: I0216 00:19:47.642161 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 16 00:19:48 crc kubenswrapper[4698]: E0216 00:19:48.412812 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="d76004b3-8be8-40f4-be5e-e9a792bebce1" Feb 16 00:19:49 crc kubenswrapper[4698]: I0216 00:19:49.420746 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-n4gm2" 
event={"ID":"1dafa441-a232-4849-84a9-d35e021ba65f","Type":"ContainerStarted","Data":"b43e6974a8b77ec1d19a5684bf87a256010845fb2955229da48cc736948d5320"} Feb 16 00:19:49 crc kubenswrapper[4698]: E0216 00:19:49.423307 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="d76004b3-8be8-40f4-be5e-e9a792bebce1" Feb 16 00:19:49 crc kubenswrapper[4698]: I0216 00:19:49.445201 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-n4gm2" podStartSLOduration=8.912369944 podStartE2EDuration="11.445176726s" podCreationTimestamp="2026-02-16 00:19:38 +0000 UTC" firstStartedPulling="2026-02-16 00:19:46.385888587 +0000 UTC m=+796.043787339" lastFinishedPulling="2026-02-16 00:19:48.918695359 +0000 UTC m=+798.576594121" observedRunningTime="2026-02-16 00:19:49.444835245 +0000 UTC m=+799.102734027" watchObservedRunningTime="2026-02-16 00:19:49.445176726 +0000 UTC m=+799.103075488" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.292162 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-2rgj5"] Feb 16 00:19:53 crc kubenswrapper[4698]: E0216 00:19:53.293154 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1003885-07f6-45cb-b3f1-a45aff5c0c11" containerName="extract-utilities" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.293172 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1003885-07f6-45cb-b3f1-a45aff5c0c11" containerName="extract-utilities" Feb 16 00:19:53 crc kubenswrapper[4698]: E0216 00:19:53.293198 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1003885-07f6-45cb-b3f1-a45aff5c0c11" 
containerName="extract-content" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.293207 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1003885-07f6-45cb-b3f1-a45aff5c0c11" containerName="extract-content" Feb 16 00:19:53 crc kubenswrapper[4698]: E0216 00:19:53.293219 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1003885-07f6-45cb-b3f1-a45aff5c0c11" containerName="registry-server" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.293229 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1003885-07f6-45cb-b3f1-a45aff5c0c11" containerName="registry-server" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.293370 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1003885-07f6-45cb-b3f1-a45aff5c0c11" containerName="registry-server" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.294884 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-2rgj5" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.297169 4698 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-r874v" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.297215 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.297904 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.305988 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-2rgj5"] Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.382796 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5r4v\" (UniqueName: 
\"kubernetes.io/projected/2661205f-64a5-4fd6-b7a2-a243fb57a87a-kube-api-access-v5r4v\") pod \"cert-manager-webhook-6888856db4-2rgj5\" (UID: \"2661205f-64a5-4fd6-b7a2-a243fb57a87a\") " pod="cert-manager/cert-manager-webhook-6888856db4-2rgj5" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.382888 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2661205f-64a5-4fd6-b7a2-a243fb57a87a-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-2rgj5\" (UID: \"2661205f-64a5-4fd6-b7a2-a243fb57a87a\") " pod="cert-manager/cert-manager-webhook-6888856db4-2rgj5" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.483924 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2661205f-64a5-4fd6-b7a2-a243fb57a87a-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-2rgj5\" (UID: \"2661205f-64a5-4fd6-b7a2-a243fb57a87a\") " pod="cert-manager/cert-manager-webhook-6888856db4-2rgj5" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.484007 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5r4v\" (UniqueName: \"kubernetes.io/projected/2661205f-64a5-4fd6-b7a2-a243fb57a87a-kube-api-access-v5r4v\") pod \"cert-manager-webhook-6888856db4-2rgj5\" (UID: \"2661205f-64a5-4fd6-b7a2-a243fb57a87a\") " pod="cert-manager/cert-manager-webhook-6888856db4-2rgj5" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.506113 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2661205f-64a5-4fd6-b7a2-a243fb57a87a-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-2rgj5\" (UID: \"2661205f-64a5-4fd6-b7a2-a243fb57a87a\") " pod="cert-manager/cert-manager-webhook-6888856db4-2rgj5" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.511601 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5r4v\" (UniqueName: \"kubernetes.io/projected/2661205f-64a5-4fd6-b7a2-a243fb57a87a-kube-api-access-v5r4v\") pod \"cert-manager-webhook-6888856db4-2rgj5\" (UID: \"2661205f-64a5-4fd6-b7a2-a243fb57a87a\") " pod="cert-manager/cert-manager-webhook-6888856db4-2rgj5" Feb 16 00:19:53 crc kubenswrapper[4698]: I0216 00:19:53.612143 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-2rgj5" Feb 16 00:19:54 crc kubenswrapper[4698]: I0216 00:19:54.241048 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-2rgj5"] Feb 16 00:19:54 crc kubenswrapper[4698]: W0216 00:19:54.250870 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2661205f_64a5_4fd6_b7a2_a243fb57a87a.slice/crio-70b4cc7479cef6d7177fd40f4e0f36d7062c88c6d1cf0cdb569c63cadde0a158 WatchSource:0}: Error finding container 70b4cc7479cef6d7177fd40f4e0f36d7062c88c6d1cf0cdb569c63cadde0a158: Status 404 returned error can't find the container with id 70b4cc7479cef6d7177fd40f4e0f36d7062c88c6d1cf0cdb569c63cadde0a158 Feb 16 00:19:54 crc kubenswrapper[4698]: I0216 00:19:54.252572 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-h8hzz"] Feb 16 00:19:54 crc kubenswrapper[4698]: I0216 00:19:54.253576 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-h8hzz" Feb 16 00:19:54 crc kubenswrapper[4698]: I0216 00:19:54.255850 4698 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hljs8" Feb 16 00:19:54 crc kubenswrapper[4698]: I0216 00:19:54.264780 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-h8hzz"] Feb 16 00:19:54 crc kubenswrapper[4698]: I0216 00:19:54.300124 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78cfa9e0-d6da-44f0-94ea-8067668d7efa-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-h8hzz\" (UID: \"78cfa9e0-d6da-44f0-94ea-8067668d7efa\") " pod="cert-manager/cert-manager-cainjector-5545bd876-h8hzz" Feb 16 00:19:54 crc kubenswrapper[4698]: I0216 00:19:54.300199 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4crr\" (UniqueName: \"kubernetes.io/projected/78cfa9e0-d6da-44f0-94ea-8067668d7efa-kube-api-access-v4crr\") pod \"cert-manager-cainjector-5545bd876-h8hzz\" (UID: \"78cfa9e0-d6da-44f0-94ea-8067668d7efa\") " pod="cert-manager/cert-manager-cainjector-5545bd876-h8hzz" Feb 16 00:19:54 crc kubenswrapper[4698]: I0216 00:19:54.402164 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4crr\" (UniqueName: \"kubernetes.io/projected/78cfa9e0-d6da-44f0-94ea-8067668d7efa-kube-api-access-v4crr\") pod \"cert-manager-cainjector-5545bd876-h8hzz\" (UID: \"78cfa9e0-d6da-44f0-94ea-8067668d7efa\") " pod="cert-manager/cert-manager-cainjector-5545bd876-h8hzz" Feb 16 00:19:54 crc kubenswrapper[4698]: I0216 00:19:54.402274 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/78cfa9e0-d6da-44f0-94ea-8067668d7efa-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-h8hzz\" (UID: \"78cfa9e0-d6da-44f0-94ea-8067668d7efa\") " pod="cert-manager/cert-manager-cainjector-5545bd876-h8hzz" Feb 16 00:19:54 crc kubenswrapper[4698]: I0216 00:19:54.430568 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4crr\" (UniqueName: \"kubernetes.io/projected/78cfa9e0-d6da-44f0-94ea-8067668d7efa-kube-api-access-v4crr\") pod \"cert-manager-cainjector-5545bd876-h8hzz\" (UID: \"78cfa9e0-d6da-44f0-94ea-8067668d7efa\") " pod="cert-manager/cert-manager-cainjector-5545bd876-h8hzz" Feb 16 00:19:54 crc kubenswrapper[4698]: I0216 00:19:54.431043 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78cfa9e0-d6da-44f0-94ea-8067668d7efa-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-h8hzz\" (UID: \"78cfa9e0-d6da-44f0-94ea-8067668d7efa\") " pod="cert-manager/cert-manager-cainjector-5545bd876-h8hzz" Feb 16 00:19:54 crc kubenswrapper[4698]: I0216 00:19:54.451510 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-2rgj5" event={"ID":"2661205f-64a5-4fd6-b7a2-a243fb57a87a","Type":"ContainerStarted","Data":"70b4cc7479cef6d7177fd40f4e0f36d7062c88c6d1cf0cdb569c63cadde0a158"} Feb 16 00:19:54 crc kubenswrapper[4698]: I0216 00:19:54.606365 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-h8hzz" Feb 16 00:19:54 crc kubenswrapper[4698]: W0216 00:19:54.829867 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78cfa9e0_d6da_44f0_94ea_8067668d7efa.slice/crio-7ee0a4acdfc41ed3d68730d98cbf8e2d7727e9aa95790abc831f0d137cd4671e WatchSource:0}: Error finding container 7ee0a4acdfc41ed3d68730d98cbf8e2d7727e9aa95790abc831f0d137cd4671e: Status 404 returned error can't find the container with id 7ee0a4acdfc41ed3d68730d98cbf8e2d7727e9aa95790abc831f0d137cd4671e Feb 16 00:19:54 crc kubenswrapper[4698]: I0216 00:19:54.829956 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-h8hzz"] Feb 16 00:19:55 crc kubenswrapper[4698]: I0216 00:19:55.460666 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-h8hzz" event={"ID":"78cfa9e0-d6da-44f0-94ea-8067668d7efa","Type":"ContainerStarted","Data":"7ee0a4acdfc41ed3d68730d98cbf8e2d7727e9aa95790abc831f0d137cd4671e"} Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.366777 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.368455 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.374533 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.374854 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.375542 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.376217 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-qfmzh" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.395831 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.457272 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.457339 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.457362 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.457393 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56d475bc-2956-4418-8f47-d11656363072-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.457410 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pkvw\" (UniqueName: \"kubernetes.io/projected/56d475bc-2956-4418-8f47-d11656363072-kube-api-access-2pkvw\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.457671 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.457740 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/56d475bc-2956-4418-8f47-d11656363072-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: 
\"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.457851 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.457968 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.458009 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/56d475bc-2956-4418-8f47-d11656363072-builder-dockercfg-qfmzh-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.458094 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/56d475bc-2956-4418-8f47-d11656363072-builder-dockercfg-qfmzh-push\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.458214 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.559873 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.559943 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/56d475bc-2956-4418-8f47-d11656363072-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.559979 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.560031 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 
16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.560041 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/56d475bc-2956-4418-8f47-d11656363072-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.560069 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/56d475bc-2956-4418-8f47-d11656363072-builder-dockercfg-qfmzh-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.560114 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/56d475bc-2956-4418-8f47-d11656363072-builder-dockercfg-qfmzh-push\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.560165 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.560216 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: 
\"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.560250 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.560279 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.560315 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56d475bc-2956-4418-8f47-d11656363072-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.560341 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pkvw\" (UniqueName: \"kubernetes.io/projected/56d475bc-2956-4418-8f47-d11656363072-kube-api-access-2pkvw\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.560507 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.560521 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56d475bc-2956-4418-8f47-d11656363072-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.560531 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.561582 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.562051 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.563641 4698 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.563879 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.564035 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.569188 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/56d475bc-2956-4418-8f47-d11656363072-builder-dockercfg-qfmzh-push\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.570447 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/56d475bc-2956-4418-8f47-d11656363072-builder-dockercfg-qfmzh-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc 
kubenswrapper[4698]: I0216 00:19:57.591896 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pkvw\" (UniqueName: \"kubernetes.io/projected/56d475bc-2956-4418-8f47-d11656363072-kube-api-access-2pkvw\") pod \"service-telemetry-operator-1-build\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:57 crc kubenswrapper[4698]: I0216 00:19:57.703785 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:19:58 crc kubenswrapper[4698]: I0216 00:19:58.909350 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 16 00:19:59 crc kubenswrapper[4698]: I0216 00:19:59.490004 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-h8hzz" event={"ID":"78cfa9e0-d6da-44f0-94ea-8067668d7efa","Type":"ContainerStarted","Data":"40fd2cef6af1cd64c5e2076c2b0da154ef1b6ce1a8f7a179055c33e7b1c37f35"} Feb 16 00:19:59 crc kubenswrapper[4698]: I0216 00:19:59.493762 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"56d475bc-2956-4418-8f47-d11656363072","Type":"ContainerStarted","Data":"29d3829d7c8fcfac3410715c690ed32b9d25d99b557be6bbaf0dd6fc49924f8e"} Feb 16 00:19:59 crc kubenswrapper[4698]: I0216 00:19:59.496210 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-2rgj5" event={"ID":"2661205f-64a5-4fd6-b7a2-a243fb57a87a","Type":"ContainerStarted","Data":"58eeb98e1855be46d89d8b664d2ecce51c30506811e9d37129b3c940e7bd6c39"} Feb 16 00:19:59 crc kubenswrapper[4698]: I0216 00:19:59.496994 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-2rgj5" Feb 16 00:19:59 crc kubenswrapper[4698]: I0216 
00:19:59.510915 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-h8hzz" podStartSLOduration=1.532273616 podStartE2EDuration="5.510892401s" podCreationTimestamp="2026-02-16 00:19:54 +0000 UTC" firstStartedPulling="2026-02-16 00:19:54.83284991 +0000 UTC m=+804.490748682" lastFinishedPulling="2026-02-16 00:19:58.811468695 +0000 UTC m=+808.469367467" observedRunningTime="2026-02-16 00:19:59.508425174 +0000 UTC m=+809.166323946" watchObservedRunningTime="2026-02-16 00:19:59.510892401 +0000 UTC m=+809.168791183" Feb 16 00:19:59 crc kubenswrapper[4698]: I0216 00:19:59.536529 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-2rgj5" podStartSLOduration=2.000748652 podStartE2EDuration="6.536502858s" podCreationTimestamp="2026-02-16 00:19:53 +0000 UTC" firstStartedPulling="2026-02-16 00:19:54.254718367 +0000 UTC m=+803.912617129" lastFinishedPulling="2026-02-16 00:19:58.790472573 +0000 UTC m=+808.448371335" observedRunningTime="2026-02-16 00:19:59.53496929 +0000 UTC m=+809.192868072" watchObservedRunningTime="2026-02-16 00:19:59.536502858 +0000 UTC m=+809.194401620" Feb 16 00:20:03 crc kubenswrapper[4698]: I0216 00:20:03.322275 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-pjqpd"] Feb 16 00:20:03 crc kubenswrapper[4698]: I0216 00:20:03.324870 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-pjqpd" Feb 16 00:20:03 crc kubenswrapper[4698]: I0216 00:20:03.327719 4698 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-65fqh" Feb 16 00:20:03 crc kubenswrapper[4698]: I0216 00:20:03.359512 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-pjqpd"] Feb 16 00:20:03 crc kubenswrapper[4698]: I0216 00:20:03.364330 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f38cd8e-5e59-4142-9cb0-acd83e924991-bound-sa-token\") pod \"cert-manager-545d4d4674-pjqpd\" (UID: \"8f38cd8e-5e59-4142-9cb0-acd83e924991\") " pod="cert-manager/cert-manager-545d4d4674-pjqpd" Feb 16 00:20:03 crc kubenswrapper[4698]: I0216 00:20:03.364506 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qw2m\" (UniqueName: \"kubernetes.io/projected/8f38cd8e-5e59-4142-9cb0-acd83e924991-kube-api-access-5qw2m\") pod \"cert-manager-545d4d4674-pjqpd\" (UID: \"8f38cd8e-5e59-4142-9cb0-acd83e924991\") " pod="cert-manager/cert-manager-545d4d4674-pjqpd" Feb 16 00:20:03 crc kubenswrapper[4698]: I0216 00:20:03.465809 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qw2m\" (UniqueName: \"kubernetes.io/projected/8f38cd8e-5e59-4142-9cb0-acd83e924991-kube-api-access-5qw2m\") pod \"cert-manager-545d4d4674-pjqpd\" (UID: \"8f38cd8e-5e59-4142-9cb0-acd83e924991\") " pod="cert-manager/cert-manager-545d4d4674-pjqpd" Feb 16 00:20:03 crc kubenswrapper[4698]: I0216 00:20:03.465891 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f38cd8e-5e59-4142-9cb0-acd83e924991-bound-sa-token\") pod \"cert-manager-545d4d4674-pjqpd\" (UID: 
\"8f38cd8e-5e59-4142-9cb0-acd83e924991\") " pod="cert-manager/cert-manager-545d4d4674-pjqpd" Feb 16 00:20:03 crc kubenswrapper[4698]: I0216 00:20:03.488165 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qw2m\" (UniqueName: \"kubernetes.io/projected/8f38cd8e-5e59-4142-9cb0-acd83e924991-kube-api-access-5qw2m\") pod \"cert-manager-545d4d4674-pjqpd\" (UID: \"8f38cd8e-5e59-4142-9cb0-acd83e924991\") " pod="cert-manager/cert-manager-545d4d4674-pjqpd" Feb 16 00:20:03 crc kubenswrapper[4698]: I0216 00:20:03.498306 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f38cd8e-5e59-4142-9cb0-acd83e924991-bound-sa-token\") pod \"cert-manager-545d4d4674-pjqpd\" (UID: \"8f38cd8e-5e59-4142-9cb0-acd83e924991\") " pod="cert-manager/cert-manager-545d4d4674-pjqpd" Feb 16 00:20:03 crc kubenswrapper[4698]: I0216 00:20:03.616287 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-2rgj5" Feb 16 00:20:03 crc kubenswrapper[4698]: I0216 00:20:03.673280 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-pjqpd" Feb 16 00:20:05 crc kubenswrapper[4698]: I0216 00:20:05.276007 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-pjqpd"] Feb 16 00:20:05 crc kubenswrapper[4698]: W0216 00:20:05.284033 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f38cd8e_5e59_4142_9cb0_acd83e924991.slice/crio-3213954c3b44d131a2c104cb50c3827b640f4ddf88dfbba8a7818c2322296bb8 WatchSource:0}: Error finding container 3213954c3b44d131a2c104cb50c3827b640f4ddf88dfbba8a7818c2322296bb8: Status 404 returned error can't find the container with id 3213954c3b44d131a2c104cb50c3827b640f4ddf88dfbba8a7818c2322296bb8 Feb 16 00:20:05 crc kubenswrapper[4698]: I0216 00:20:05.536841 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-pjqpd" event={"ID":"8f38cd8e-5e59-4142-9cb0-acd83e924991","Type":"ContainerStarted","Data":"301b32bebdc82a96c855814a0f91935a12d6f5771edefb22be6c5e25398e2764"} Feb 16 00:20:05 crc kubenswrapper[4698]: I0216 00:20:05.536904 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-pjqpd" event={"ID":"8f38cd8e-5e59-4142-9cb0-acd83e924991","Type":"ContainerStarted","Data":"3213954c3b44d131a2c104cb50c3827b640f4ddf88dfbba8a7818c2322296bb8"} Feb 16 00:20:05 crc kubenswrapper[4698]: I0216 00:20:05.539261 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d76004b3-8be8-40f4-be5e-e9a792bebce1","Type":"ContainerStarted","Data":"1905ba4e1cf8f1ce1a302e6dc6af74234c014de694182423453fe4621df2eab8"} Feb 16 00:20:05 crc kubenswrapper[4698]: I0216 00:20:05.542360 4698 generic.go:334] "Generic (PLEG): container finished" podID="56d475bc-2956-4418-8f47-d11656363072" containerID="43e3d82de68e8a137e15dc0bd4a0bc3dce79592fbeec134183acf8c3d58fc25c" exitCode=0 
Feb 16 00:20:05 crc kubenswrapper[4698]: I0216 00:20:05.542442 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"56d475bc-2956-4418-8f47-d11656363072","Type":"ContainerDied","Data":"43e3d82de68e8a137e15dc0bd4a0bc3dce79592fbeec134183acf8c3d58fc25c"} Feb 16 00:20:05 crc kubenswrapper[4698]: I0216 00:20:05.564322 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-pjqpd" podStartSLOduration=2.564301584 podStartE2EDuration="2.564301584s" podCreationTimestamp="2026-02-16 00:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:20:05.559645989 +0000 UTC m=+815.217544771" watchObservedRunningTime="2026-02-16 00:20:05.564301584 +0000 UTC m=+815.222200356" Feb 16 00:20:06 crc kubenswrapper[4698]: I0216 00:20:06.551430 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"56d475bc-2956-4418-8f47-d11656363072","Type":"ContainerStarted","Data":"809d0b56868694d2dc0066812102c45487d0383117f33e4142484075d8a559f8"} Feb 16 00:20:06 crc kubenswrapper[4698]: I0216 00:20:06.580559 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-1-build" podStartSLOduration=3.719279779 podStartE2EDuration="9.580533144s" podCreationTimestamp="2026-02-16 00:19:57 +0000 UTC" firstStartedPulling="2026-02-16 00:19:58.919645331 +0000 UTC m=+808.577544093" lastFinishedPulling="2026-02-16 00:20:04.780898696 +0000 UTC m=+814.438797458" observedRunningTime="2026-02-16 00:20:06.576367185 +0000 UTC m=+816.234265947" watchObservedRunningTime="2026-02-16 00:20:06.580533144 +0000 UTC m=+816.238431906" Feb 16 00:20:07 crc kubenswrapper[4698]: I0216 00:20:07.471437 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 16 00:20:07 crc kubenswrapper[4698]: I0216 00:20:07.559580 4698 generic.go:334] "Generic (PLEG): container finished" podID="d76004b3-8be8-40f4-be5e-e9a792bebce1" containerID="1905ba4e1cf8f1ce1a302e6dc6af74234c014de694182423453fe4621df2eab8" exitCode=0 Feb 16 00:20:07 crc kubenswrapper[4698]: I0216 00:20:07.559671 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d76004b3-8be8-40f4-be5e-e9a792bebce1","Type":"ContainerDied","Data":"1905ba4e1cf8f1ce1a302e6dc6af74234c014de694182423453fe4621df2eab8"} Feb 16 00:20:08 crc kubenswrapper[4698]: I0216 00:20:08.570034 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="56d475bc-2956-4418-8f47-d11656363072" containerName="docker-build" containerID="cri-o://809d0b56868694d2dc0066812102c45487d0383117f33e4142484075d8a559f8" gracePeriod=30 Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.331278 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.332479 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.335202 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.335220 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.335306 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.365366 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.365464 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.365525 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c78cb8a1-ca8c-463e-af58-225ca77c241b-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc 
kubenswrapper[4698]: I0216 00:20:09.365577 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smd7s\" (UniqueName: \"kubernetes.io/projected/c78cb8a1-ca8c-463e-af58-225ca77c241b-kube-api-access-smd7s\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.365767 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.365887 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/c78cb8a1-ca8c-463e-af58-225ca77c241b-builder-dockercfg-qfmzh-push\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.365947 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c78cb8a1-ca8c-463e-af58-225ca77c241b-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.366116 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.366217 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/c78cb8a1-ca8c-463e-af58-225ca77c241b-builder-dockercfg-qfmzh-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.366255 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.366349 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.366402 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.369529 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.468600 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/c78cb8a1-ca8c-463e-af58-225ca77c241b-builder-dockercfg-qfmzh-push\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.468717 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c78cb8a1-ca8c-463e-af58-225ca77c241b-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.468768 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.468814 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.468847 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/c78cb8a1-ca8c-463e-af58-225ca77c241b-builder-dockercfg-qfmzh-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.468881 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c78cb8a1-ca8c-463e-af58-225ca77c241b-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.468905 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.469009 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.469080 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.469104 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.469161 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c78cb8a1-ca8c-463e-af58-225ca77c241b-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.469186 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smd7s\" (UniqueName: \"kubernetes.io/projected/c78cb8a1-ca8c-463e-af58-225ca77c241b-kube-api-access-smd7s\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.469235 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.469674 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-container-storage-root\") pod 
\"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.469873 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.469947 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.469883 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c78cb8a1-ca8c-463e-af58-225ca77c241b-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.470279 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.470797 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.470858 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.471010 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.475742 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/c78cb8a1-ca8c-463e-af58-225ca77c241b-builder-dockercfg-qfmzh-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.477071 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/c78cb8a1-ca8c-463e-af58-225ca77c241b-builder-dockercfg-qfmzh-push\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.493645 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smd7s\" (UniqueName: \"kubernetes.io/projected/c78cb8a1-ca8c-463e-af58-225ca77c241b-kube-api-access-smd7s\") pod \"service-telemetry-operator-2-build\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:09 crc kubenswrapper[4698]: I0216 00:20:09.653898 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:20:11 crc kubenswrapper[4698]: I0216 00:20:11.435678 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 16 00:20:11 crc kubenswrapper[4698]: W0216 00:20:11.440891 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc78cb8a1_ca8c_463e_af58_225ca77c241b.slice/crio-66f6551f4c21ab12855c7c243d9875c52843c40b37fc5f5c60e19d72d2b0c48d WatchSource:0}: Error finding container 66f6551f4c21ab12855c7c243d9875c52843c40b37fc5f5c60e19d72d2b0c48d: Status 404 returned error can't find the container with id 66f6551f4c21ab12855c7c243d9875c52843c40b37fc5f5c60e19d72d2b0c48d Feb 16 00:20:11 crc kubenswrapper[4698]: I0216 00:20:11.595950 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"c78cb8a1-ca8c-463e-af58-225ca77c241b","Type":"ContainerStarted","Data":"66f6551f4c21ab12855c7c243d9875c52843c40b37fc5f5c60e19d72d2b0c48d"} Feb 16 00:20:12 crc kubenswrapper[4698]: I0216 00:20:12.604995 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"c78cb8a1-ca8c-463e-af58-225ca77c241b","Type":"ContainerStarted","Data":"e31d9dd85e4fa6b04ce425fbc3a98c83647fc4ec232ab4d7b3477946d09a6f92"} Feb 16 00:20:12 crc kubenswrapper[4698]: I0216 00:20:12.611531 4698 
generic.go:334] "Generic (PLEG): container finished" podID="d76004b3-8be8-40f4-be5e-e9a792bebce1" containerID="797fc4f8bb57f2a6bfb9e9c81d4491d990d31637b9345f6f67837067aa069169" exitCode=0 Feb 16 00:20:12 crc kubenswrapper[4698]: I0216 00:20:12.611628 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d76004b3-8be8-40f4-be5e-e9a792bebce1","Type":"ContainerDied","Data":"797fc4f8bb57f2a6bfb9e9c81d4491d990d31637b9345f6f67837067aa069169"} Feb 16 00:20:12 crc kubenswrapper[4698]: I0216 00:20:12.616113 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_56d475bc-2956-4418-8f47-d11656363072/docker-build/0.log" Feb 16 00:20:12 crc kubenswrapper[4698]: I0216 00:20:12.616666 4698 generic.go:334] "Generic (PLEG): container finished" podID="56d475bc-2956-4418-8f47-d11656363072" containerID="809d0b56868694d2dc0066812102c45487d0383117f33e4142484075d8a559f8" exitCode=1 Feb 16 00:20:12 crc kubenswrapper[4698]: I0216 00:20:12.616702 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"56d475bc-2956-4418-8f47-d11656363072","Type":"ContainerDied","Data":"809d0b56868694d2dc0066812102c45487d0383117f33e4142484075d8a559f8"} Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.056862 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_56d475bc-2956-4418-8f47-d11656363072/docker-build/0.log" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.057485 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.235224 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-container-storage-root\") pod \"56d475bc-2956-4418-8f47-d11656363072\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.235872 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/56d475bc-2956-4418-8f47-d11656363072-buildcachedir\") pod \"56d475bc-2956-4418-8f47-d11656363072\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.235932 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-system-configs\") pod \"56d475bc-2956-4418-8f47-d11656363072\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.236160 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d475bc-2956-4418-8f47-d11656363072-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "56d475bc-2956-4418-8f47-d11656363072" (UID: "56d475bc-2956-4418-8f47-d11656363072"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.236353 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pkvw\" (UniqueName: \"kubernetes.io/projected/56d475bc-2956-4418-8f47-d11656363072-kube-api-access-2pkvw\") pod \"56d475bc-2956-4418-8f47-d11656363072\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.236399 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/56d475bc-2956-4418-8f47-d11656363072-builder-dockercfg-qfmzh-pull\") pod \"56d475bc-2956-4418-8f47-d11656363072\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.236440 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/56d475bc-2956-4418-8f47-d11656363072-builder-dockercfg-qfmzh-push\") pod \"56d475bc-2956-4418-8f47-d11656363072\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.236478 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-proxy-ca-bundles\") pod \"56d475bc-2956-4418-8f47-d11656363072\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.236532 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-build-blob-cache\") pod \"56d475bc-2956-4418-8f47-d11656363072\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.236691 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-container-storage-run\") pod \"56d475bc-2956-4418-8f47-d11656363072\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.236733 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56d475bc-2956-4418-8f47-d11656363072-node-pullsecrets\") pod \"56d475bc-2956-4418-8f47-d11656363072\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.236792 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-ca-bundles\") pod \"56d475bc-2956-4418-8f47-d11656363072\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.236800 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "56d475bc-2956-4418-8f47-d11656363072" (UID: "56d475bc-2956-4418-8f47-d11656363072"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.236894 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "56d475bc-2956-4418-8f47-d11656363072" (UID: "56d475bc-2956-4418-8f47-d11656363072"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.236928 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-buildworkdir\") pod \"56d475bc-2956-4418-8f47-d11656363072\" (UID: \"56d475bc-2956-4418-8f47-d11656363072\") " Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.237200 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "56d475bc-2956-4418-8f47-d11656363072" (UID: "56d475bc-2956-4418-8f47-d11656363072"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.237541 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d475bc-2956-4418-8f47-d11656363072-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "56d475bc-2956-4418-8f47-d11656363072" (UID: "56d475bc-2956-4418-8f47-d11656363072"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.237869 4698 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56d475bc-2956-4418-8f47-d11656363072-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.237883 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.237895 4698 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/56d475bc-2956-4418-8f47-d11656363072-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.237907 4698 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.237916 4698 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.238012 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "56d475bc-2956-4418-8f47-d11656363072" (UID: "56d475bc-2956-4418-8f47-d11656363072"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.238100 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "56d475bc-2956-4418-8f47-d11656363072" (UID: "56d475bc-2956-4418-8f47-d11656363072"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.238256 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "56d475bc-2956-4418-8f47-d11656363072" (UID: "56d475bc-2956-4418-8f47-d11656363072"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.238439 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "56d475bc-2956-4418-8f47-d11656363072" (UID: "56d475bc-2956-4418-8f47-d11656363072"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.253814 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d475bc-2956-4418-8f47-d11656363072-kube-api-access-2pkvw" (OuterVolumeSpecName: "kube-api-access-2pkvw") pod "56d475bc-2956-4418-8f47-d11656363072" (UID: "56d475bc-2956-4418-8f47-d11656363072"). InnerVolumeSpecName "kube-api-access-2pkvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.253819 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d475bc-2956-4418-8f47-d11656363072-builder-dockercfg-qfmzh-pull" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-pull") pod "56d475bc-2956-4418-8f47-d11656363072" (UID: "56d475bc-2956-4418-8f47-d11656363072"). InnerVolumeSpecName "builder-dockercfg-qfmzh-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.253865 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d475bc-2956-4418-8f47-d11656363072-builder-dockercfg-qfmzh-push" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-push") pod "56d475bc-2956-4418-8f47-d11656363072" (UID: "56d475bc-2956-4418-8f47-d11656363072"). InnerVolumeSpecName "builder-dockercfg-qfmzh-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.338520 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/56d475bc-2956-4418-8f47-d11656363072-builder-dockercfg-qfmzh-pull\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.338928 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pkvw\" (UniqueName: \"kubernetes.io/projected/56d475bc-2956-4418-8f47-d11656363072-kube-api-access-2pkvw\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.338943 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/56d475bc-2956-4418-8f47-d11656363072-builder-dockercfg-qfmzh-push\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.338961 4698 reconciler_common.go:293] "Volume detached for 
volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.338973 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.338984 4698 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56d475bc-2956-4418-8f47-d11656363072-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.338995 4698 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/56d475bc-2956-4418-8f47-d11656363072-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.625726 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d76004b3-8be8-40f4-be5e-e9a792bebce1","Type":"ContainerStarted","Data":"e1ab17ac0b656b9590c7e051933ffa99bd3760c0ba8d32e772fdc14dadbf50c0"} Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.626862 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.629715 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_56d475bc-2956-4418-8f47-d11656363072/docker-build/0.log" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.630309 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" 
event={"ID":"56d475bc-2956-4418-8f47-d11656363072","Type":"ContainerDied","Data":"29d3829d7c8fcfac3410715c690ed32b9d25d99b557be6bbaf0dd6fc49924f8e"} Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.630341 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.630389 4698 scope.go:117] "RemoveContainer" containerID="809d0b56868694d2dc0066812102c45487d0383117f33e4142484075d8a559f8" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.655113 4698 scope.go:117] "RemoveContainer" containerID="43e3d82de68e8a137e15dc0bd4a0bc3dce79592fbeec134183acf8c3d58fc25c" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.676033 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=9.955213413 podStartE2EDuration="47.67600385s" podCreationTimestamp="2026-02-16 00:19:26 +0000 UTC" firstStartedPulling="2026-02-16 00:19:27.16014036 +0000 UTC m=+776.818039122" lastFinishedPulling="2026-02-16 00:20:04.880930807 +0000 UTC m=+814.538829559" observedRunningTime="2026-02-16 00:20:13.662596542 +0000 UTC m=+823.320495304" watchObservedRunningTime="2026-02-16 00:20:13.67600385 +0000 UTC m=+823.333902612" Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.689710 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 16 00:20:13 crc kubenswrapper[4698]: I0216 00:20:13.707813 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 16 00:20:15 crc kubenswrapper[4698]: I0216 00:20:15.240911 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d475bc-2956-4418-8f47-d11656363072" path="/var/lib/kubelet/pods/56d475bc-2956-4418-8f47-d11656363072/volumes" Feb 16 00:20:21 crc kubenswrapper[4698]: I0216 
00:20:21.681933 4698 generic.go:334] "Generic (PLEG): container finished" podID="c78cb8a1-ca8c-463e-af58-225ca77c241b" containerID="e31d9dd85e4fa6b04ce425fbc3a98c83647fc4ec232ab4d7b3477946d09a6f92" exitCode=0 Feb 16 00:20:21 crc kubenswrapper[4698]: I0216 00:20:21.682030 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"c78cb8a1-ca8c-463e-af58-225ca77c241b","Type":"ContainerDied","Data":"e31d9dd85e4fa6b04ce425fbc3a98c83647fc4ec232ab4d7b3477946d09a6f92"} Feb 16 00:20:22 crc kubenswrapper[4698]: I0216 00:20:22.695221 4698 generic.go:334] "Generic (PLEG): container finished" podID="c78cb8a1-ca8c-463e-af58-225ca77c241b" containerID="02187a8c3e5705c25f8cf5ca1fa4ea8fadce1f79100b4f9c9a42ae34cfe90379" exitCode=0 Feb 16 00:20:22 crc kubenswrapper[4698]: I0216 00:20:22.695798 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"c78cb8a1-ca8c-463e-af58-225ca77c241b","Type":"ContainerDied","Data":"02187a8c3e5705c25f8cf5ca1fa4ea8fadce1f79100b4f9c9a42ae34cfe90379"} Feb 16 00:20:22 crc kubenswrapper[4698]: I0216 00:20:22.746923 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_c78cb8a1-ca8c-463e-af58-225ca77c241b/manage-dockerfile/0.log" Feb 16 00:20:23 crc kubenswrapper[4698]: I0216 00:20:23.705441 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"c78cb8a1-ca8c-463e-af58-225ca77c241b","Type":"ContainerStarted","Data":"b62f20cf630b1c04bfbd8546d0950c0d633bf893804adbb9f96e69023e95037b"} Feb 16 00:20:26 crc kubenswrapper[4698]: I0216 00:20:26.905918 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="d76004b3-8be8-40f4-be5e-e9a792bebce1" containerName="elasticsearch" probeResult="failure" output=< Feb 16 00:20:26 crc 
kubenswrapper[4698]: {"timestamp": "2026-02-16T00:20:26+00:00", "message": "readiness probe failed", "curl_rc": "7"} Feb 16 00:20:26 crc kubenswrapper[4698]: > Feb 16 00:20:32 crc kubenswrapper[4698]: I0216 00:20:32.225899 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:32 crc kubenswrapper[4698]: I0216 00:20:32.273601 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=23.27357422 podStartE2EDuration="23.27357422s" podCreationTimestamp="2026-02-16 00:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:20:23.749703313 +0000 UTC m=+833.407602125" watchObservedRunningTime="2026-02-16 00:20:32.27357422 +0000 UTC m=+841.931472982" Feb 16 00:21:14 crc kubenswrapper[4698]: I0216 00:21:14.769533 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bj7vs"] Feb 16 00:21:14 crc kubenswrapper[4698]: E0216 00:21:14.770520 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d475bc-2956-4418-8f47-d11656363072" containerName="docker-build" Feb 16 00:21:14 crc kubenswrapper[4698]: I0216 00:21:14.770540 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d475bc-2956-4418-8f47-d11656363072" containerName="docker-build" Feb 16 00:21:14 crc kubenswrapper[4698]: E0216 00:21:14.770558 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d475bc-2956-4418-8f47-d11656363072" containerName="manage-dockerfile" Feb 16 00:21:14 crc kubenswrapper[4698]: I0216 00:21:14.770566 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d475bc-2956-4418-8f47-d11656363072" containerName="manage-dockerfile" Feb 16 00:21:14 crc kubenswrapper[4698]: I0216 00:21:14.770752 4698 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="56d475bc-2956-4418-8f47-d11656363072" containerName="docker-build" Feb 16 00:21:14 crc kubenswrapper[4698]: I0216 00:21:14.771792 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:14 crc kubenswrapper[4698]: I0216 00:21:14.788343 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bj7vs"] Feb 16 00:21:14 crc kubenswrapper[4698]: I0216 00:21:14.955238 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2lw6\" (UniqueName: \"kubernetes.io/projected/6324abe0-286f-4e82-8f3a-f51caa528b3c-kube-api-access-z2lw6\") pod \"community-operators-bj7vs\" (UID: \"6324abe0-286f-4e82-8f3a-f51caa528b3c\") " pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:14 crc kubenswrapper[4698]: I0216 00:21:14.955322 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6324abe0-286f-4e82-8f3a-f51caa528b3c-utilities\") pod \"community-operators-bj7vs\" (UID: \"6324abe0-286f-4e82-8f3a-f51caa528b3c\") " pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:14 crc kubenswrapper[4698]: I0216 00:21:14.955342 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6324abe0-286f-4e82-8f3a-f51caa528b3c-catalog-content\") pod \"community-operators-bj7vs\" (UID: \"6324abe0-286f-4e82-8f3a-f51caa528b3c\") " pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:15 crc kubenswrapper[4698]: I0216 00:21:15.057254 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2lw6\" (UniqueName: \"kubernetes.io/projected/6324abe0-286f-4e82-8f3a-f51caa528b3c-kube-api-access-z2lw6\") pod 
\"community-operators-bj7vs\" (UID: \"6324abe0-286f-4e82-8f3a-f51caa528b3c\") " pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:15 crc kubenswrapper[4698]: I0216 00:21:15.057312 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6324abe0-286f-4e82-8f3a-f51caa528b3c-utilities\") pod \"community-operators-bj7vs\" (UID: \"6324abe0-286f-4e82-8f3a-f51caa528b3c\") " pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:15 crc kubenswrapper[4698]: I0216 00:21:15.057333 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6324abe0-286f-4e82-8f3a-f51caa528b3c-catalog-content\") pod \"community-operators-bj7vs\" (UID: \"6324abe0-286f-4e82-8f3a-f51caa528b3c\") " pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:15 crc kubenswrapper[4698]: I0216 00:21:15.057834 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6324abe0-286f-4e82-8f3a-f51caa528b3c-catalog-content\") pod \"community-operators-bj7vs\" (UID: \"6324abe0-286f-4e82-8f3a-f51caa528b3c\") " pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:15 crc kubenswrapper[4698]: I0216 00:21:15.058017 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6324abe0-286f-4e82-8f3a-f51caa528b3c-utilities\") pod \"community-operators-bj7vs\" (UID: \"6324abe0-286f-4e82-8f3a-f51caa528b3c\") " pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:15 crc kubenswrapper[4698]: I0216 00:21:15.085590 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2lw6\" (UniqueName: \"kubernetes.io/projected/6324abe0-286f-4e82-8f3a-f51caa528b3c-kube-api-access-z2lw6\") pod \"community-operators-bj7vs\" (UID: 
\"6324abe0-286f-4e82-8f3a-f51caa528b3c\") " pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:15 crc kubenswrapper[4698]: I0216 00:21:15.135311 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:15 crc kubenswrapper[4698]: I0216 00:21:15.601030 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bj7vs"] Feb 16 00:21:16 crc kubenswrapper[4698]: I0216 00:21:16.098183 4698 generic.go:334] "Generic (PLEG): container finished" podID="6324abe0-286f-4e82-8f3a-f51caa528b3c" containerID="4515ebe1bc0c23ad192d76ad2b62932baf8cb176a941e9193a4451569e07ccd7" exitCode=0 Feb 16 00:21:16 crc kubenswrapper[4698]: I0216 00:21:16.098383 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj7vs" event={"ID":"6324abe0-286f-4e82-8f3a-f51caa528b3c","Type":"ContainerDied","Data":"4515ebe1bc0c23ad192d76ad2b62932baf8cb176a941e9193a4451569e07ccd7"} Feb 16 00:21:16 crc kubenswrapper[4698]: I0216 00:21:16.098524 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj7vs" event={"ID":"6324abe0-286f-4e82-8f3a-f51caa528b3c","Type":"ContainerStarted","Data":"a6071a7a2e776da014f699abf59eab217c11caaaa461b5a0819e369e50eb9a88"} Feb 16 00:21:18 crc kubenswrapper[4698]: I0216 00:21:18.111846 4698 generic.go:334] "Generic (PLEG): container finished" podID="6324abe0-286f-4e82-8f3a-f51caa528b3c" containerID="b78f015691084c68b2a28dc14d51dc0d1475f409f4ca71f84a2ec0481422cb58" exitCode=0 Feb 16 00:21:18 crc kubenswrapper[4698]: I0216 00:21:18.111962 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj7vs" event={"ID":"6324abe0-286f-4e82-8f3a-f51caa528b3c","Type":"ContainerDied","Data":"b78f015691084c68b2a28dc14d51dc0d1475f409f4ca71f84a2ec0481422cb58"} Feb 16 00:21:20 crc kubenswrapper[4698]: I0216 
00:21:20.127086 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj7vs" event={"ID":"6324abe0-286f-4e82-8f3a-f51caa528b3c","Type":"ContainerStarted","Data":"3065066b3c66f2837a4120c3b2360a05f56c568bd68f9c29a6dd7bd02561c747"} Feb 16 00:21:20 crc kubenswrapper[4698]: I0216 00:21:20.154435 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bj7vs" podStartSLOduration=3.14131112 podStartE2EDuration="6.154406372s" podCreationTimestamp="2026-02-16 00:21:14 +0000 UTC" firstStartedPulling="2026-02-16 00:21:16.100471814 +0000 UTC m=+885.758370576" lastFinishedPulling="2026-02-16 00:21:19.113567026 +0000 UTC m=+888.771465828" observedRunningTime="2026-02-16 00:21:20.147865408 +0000 UTC m=+889.805764210" watchObservedRunningTime="2026-02-16 00:21:20.154406372 +0000 UTC m=+889.812305174" Feb 16 00:21:25 crc kubenswrapper[4698]: I0216 00:21:25.135785 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:25 crc kubenswrapper[4698]: I0216 00:21:25.136175 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:25 crc kubenswrapper[4698]: I0216 00:21:25.201174 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:25 crc kubenswrapper[4698]: I0216 00:21:25.255550 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:25 crc kubenswrapper[4698]: I0216 00:21:25.450755 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bj7vs"] Feb 16 00:21:27 crc kubenswrapper[4698]: I0216 00:21:27.046511 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:21:27 crc kubenswrapper[4698]: I0216 00:21:27.046605 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:21:27 crc kubenswrapper[4698]: I0216 00:21:27.177172 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bj7vs" podUID="6324abe0-286f-4e82-8f3a-f51caa528b3c" containerName="registry-server" containerID="cri-o://3065066b3c66f2837a4120c3b2360a05f56c568bd68f9c29a6dd7bd02561c747" gracePeriod=2 Feb 16 00:21:27 crc kubenswrapper[4698]: I0216 00:21:27.627926 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:27 crc kubenswrapper[4698]: I0216 00:21:27.664843 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6324abe0-286f-4e82-8f3a-f51caa528b3c-catalog-content\") pod \"6324abe0-286f-4e82-8f3a-f51caa528b3c\" (UID: \"6324abe0-286f-4e82-8f3a-f51caa528b3c\") " Feb 16 00:21:27 crc kubenswrapper[4698]: I0216 00:21:27.664919 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6324abe0-286f-4e82-8f3a-f51caa528b3c-utilities\") pod \"6324abe0-286f-4e82-8f3a-f51caa528b3c\" (UID: \"6324abe0-286f-4e82-8f3a-f51caa528b3c\") " Feb 16 00:21:27 crc kubenswrapper[4698]: I0216 00:21:27.664991 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2lw6\" (UniqueName: \"kubernetes.io/projected/6324abe0-286f-4e82-8f3a-f51caa528b3c-kube-api-access-z2lw6\") pod \"6324abe0-286f-4e82-8f3a-f51caa528b3c\" (UID: \"6324abe0-286f-4e82-8f3a-f51caa528b3c\") " Feb 16 00:21:27 crc kubenswrapper[4698]: I0216 00:21:27.666108 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6324abe0-286f-4e82-8f3a-f51caa528b3c-utilities" (OuterVolumeSpecName: "utilities") pod "6324abe0-286f-4e82-8f3a-f51caa528b3c" (UID: "6324abe0-286f-4e82-8f3a-f51caa528b3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:21:27 crc kubenswrapper[4698]: I0216 00:21:27.676196 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6324abe0-286f-4e82-8f3a-f51caa528b3c-kube-api-access-z2lw6" (OuterVolumeSpecName: "kube-api-access-z2lw6") pod "6324abe0-286f-4e82-8f3a-f51caa528b3c" (UID: "6324abe0-286f-4e82-8f3a-f51caa528b3c"). InnerVolumeSpecName "kube-api-access-z2lw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:21:27 crc kubenswrapper[4698]: I0216 00:21:27.769077 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6324abe0-286f-4e82-8f3a-f51caa528b3c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:27 crc kubenswrapper[4698]: I0216 00:21:27.769137 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2lw6\" (UniqueName: \"kubernetes.io/projected/6324abe0-286f-4e82-8f3a-f51caa528b3c-kube-api-access-z2lw6\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:27 crc kubenswrapper[4698]: I0216 00:21:27.776500 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6324abe0-286f-4e82-8f3a-f51caa528b3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6324abe0-286f-4e82-8f3a-f51caa528b3c" (UID: "6324abe0-286f-4e82-8f3a-f51caa528b3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:21:27 crc kubenswrapper[4698]: I0216 00:21:27.870706 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6324abe0-286f-4e82-8f3a-f51caa528b3c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 00:21:28.188051 4698 generic.go:334] "Generic (PLEG): container finished" podID="6324abe0-286f-4e82-8f3a-f51caa528b3c" containerID="3065066b3c66f2837a4120c3b2360a05f56c568bd68f9c29a6dd7bd02561c747" exitCode=0 Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 00:21:28.188098 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj7vs" event={"ID":"6324abe0-286f-4e82-8f3a-f51caa528b3c","Type":"ContainerDied","Data":"3065066b3c66f2837a4120c3b2360a05f56c568bd68f9c29a6dd7bd02561c747"} Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 00:21:28.188213 4698 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-bj7vs" event={"ID":"6324abe0-286f-4e82-8f3a-f51caa528b3c","Type":"ContainerDied","Data":"a6071a7a2e776da014f699abf59eab217c11caaaa461b5a0819e369e50eb9a88"} Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 00:21:28.188214 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bj7vs" Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 00:21:28.188251 4698 scope.go:117] "RemoveContainer" containerID="3065066b3c66f2837a4120c3b2360a05f56c568bd68f9c29a6dd7bd02561c747" Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 00:21:28.216332 4698 scope.go:117] "RemoveContainer" containerID="b78f015691084c68b2a28dc14d51dc0d1475f409f4ca71f84a2ec0481422cb58" Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 00:21:28.244250 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bj7vs"] Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 00:21:28.250996 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bj7vs"] Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 00:21:28.251184 4698 scope.go:117] "RemoveContainer" containerID="4515ebe1bc0c23ad192d76ad2b62932baf8cb176a941e9193a4451569e07ccd7" Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 00:21:28.270899 4698 scope.go:117] "RemoveContainer" containerID="3065066b3c66f2837a4120c3b2360a05f56c568bd68f9c29a6dd7bd02561c747" Feb 16 00:21:28 crc kubenswrapper[4698]: E0216 00:21:28.271492 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3065066b3c66f2837a4120c3b2360a05f56c568bd68f9c29a6dd7bd02561c747\": container with ID starting with 3065066b3c66f2837a4120c3b2360a05f56c568bd68f9c29a6dd7bd02561c747 not found: ID does not exist" containerID="3065066b3c66f2837a4120c3b2360a05f56c568bd68f9c29a6dd7bd02561c747" Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 
00:21:28.271560 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3065066b3c66f2837a4120c3b2360a05f56c568bd68f9c29a6dd7bd02561c747"} err="failed to get container status \"3065066b3c66f2837a4120c3b2360a05f56c568bd68f9c29a6dd7bd02561c747\": rpc error: code = NotFound desc = could not find container \"3065066b3c66f2837a4120c3b2360a05f56c568bd68f9c29a6dd7bd02561c747\": container with ID starting with 3065066b3c66f2837a4120c3b2360a05f56c568bd68f9c29a6dd7bd02561c747 not found: ID does not exist" Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 00:21:28.271605 4698 scope.go:117] "RemoveContainer" containerID="b78f015691084c68b2a28dc14d51dc0d1475f409f4ca71f84a2ec0481422cb58" Feb 16 00:21:28 crc kubenswrapper[4698]: E0216 00:21:28.272254 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b78f015691084c68b2a28dc14d51dc0d1475f409f4ca71f84a2ec0481422cb58\": container with ID starting with b78f015691084c68b2a28dc14d51dc0d1475f409f4ca71f84a2ec0481422cb58 not found: ID does not exist" containerID="b78f015691084c68b2a28dc14d51dc0d1475f409f4ca71f84a2ec0481422cb58" Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 00:21:28.272309 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b78f015691084c68b2a28dc14d51dc0d1475f409f4ca71f84a2ec0481422cb58"} err="failed to get container status \"b78f015691084c68b2a28dc14d51dc0d1475f409f4ca71f84a2ec0481422cb58\": rpc error: code = NotFound desc = could not find container \"b78f015691084c68b2a28dc14d51dc0d1475f409f4ca71f84a2ec0481422cb58\": container with ID starting with b78f015691084c68b2a28dc14d51dc0d1475f409f4ca71f84a2ec0481422cb58 not found: ID does not exist" Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 00:21:28.272346 4698 scope.go:117] "RemoveContainer" containerID="4515ebe1bc0c23ad192d76ad2b62932baf8cb176a941e9193a4451569e07ccd7" Feb 16 00:21:28 crc 
kubenswrapper[4698]: E0216 00:21:28.272774 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4515ebe1bc0c23ad192d76ad2b62932baf8cb176a941e9193a4451569e07ccd7\": container with ID starting with 4515ebe1bc0c23ad192d76ad2b62932baf8cb176a941e9193a4451569e07ccd7 not found: ID does not exist" containerID="4515ebe1bc0c23ad192d76ad2b62932baf8cb176a941e9193a4451569e07ccd7" Feb 16 00:21:28 crc kubenswrapper[4698]: I0216 00:21:28.272824 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4515ebe1bc0c23ad192d76ad2b62932baf8cb176a941e9193a4451569e07ccd7"} err="failed to get container status \"4515ebe1bc0c23ad192d76ad2b62932baf8cb176a941e9193a4451569e07ccd7\": rpc error: code = NotFound desc = could not find container \"4515ebe1bc0c23ad192d76ad2b62932baf8cb176a941e9193a4451569e07ccd7\": container with ID starting with 4515ebe1bc0c23ad192d76ad2b62932baf8cb176a941e9193a4451569e07ccd7 not found: ID does not exist" Feb 16 00:21:29 crc kubenswrapper[4698]: I0216 00:21:29.245054 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6324abe0-286f-4e82-8f3a-f51caa528b3c" path="/var/lib/kubelet/pods/6324abe0-286f-4e82-8f3a-f51caa528b3c/volumes" Feb 16 00:21:48 crc kubenswrapper[4698]: I0216 00:21:48.350482 4698 generic.go:334] "Generic (PLEG): container finished" podID="c78cb8a1-ca8c-463e-af58-225ca77c241b" containerID="b62f20cf630b1c04bfbd8546d0950c0d633bf893804adbb9f96e69023e95037b" exitCode=0 Feb 16 00:21:48 crc kubenswrapper[4698]: I0216 00:21:48.350569 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"c78cb8a1-ca8c-463e-af58-225ca77c241b","Type":"ContainerDied","Data":"b62f20cf630b1c04bfbd8546d0950c0d633bf893804adbb9f96e69023e95037b"} Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.655812 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.816978 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smd7s\" (UniqueName: \"kubernetes.io/projected/c78cb8a1-ca8c-463e-af58-225ca77c241b-kube-api-access-smd7s\") pod \"c78cb8a1-ca8c-463e-af58-225ca77c241b\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.817075 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/c78cb8a1-ca8c-463e-af58-225ca77c241b-builder-dockercfg-qfmzh-pull\") pod \"c78cb8a1-ca8c-463e-af58-225ca77c241b\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.817151 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-buildworkdir\") pod \"c78cb8a1-ca8c-463e-af58-225ca77c241b\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.817189 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c78cb8a1-ca8c-463e-af58-225ca77c241b-buildcachedir\") pod \"c78cb8a1-ca8c-463e-af58-225ca77c241b\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.817318 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-ca-bundles\") pod \"c78cb8a1-ca8c-463e-af58-225ca77c241b\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.817387 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-proxy-ca-bundles\") pod \"c78cb8a1-ca8c-463e-af58-225ca77c241b\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.817441 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-container-storage-run\") pod \"c78cb8a1-ca8c-463e-af58-225ca77c241b\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.817532 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-system-configs\") pod \"c78cb8a1-ca8c-463e-af58-225ca77c241b\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.817566 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c78cb8a1-ca8c-463e-af58-225ca77c241b-node-pullsecrets\") pod \"c78cb8a1-ca8c-463e-af58-225ca77c241b\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.817638 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-container-storage-root\") pod \"c78cb8a1-ca8c-463e-af58-225ca77c241b\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.817681 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-blob-cache\") pod \"c78cb8a1-ca8c-463e-af58-225ca77c241b\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.817723 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/c78cb8a1-ca8c-463e-af58-225ca77c241b-builder-dockercfg-qfmzh-push\") pod \"c78cb8a1-ca8c-463e-af58-225ca77c241b\" (UID: \"c78cb8a1-ca8c-463e-af58-225ca77c241b\") " Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.817910 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c78cb8a1-ca8c-463e-af58-225ca77c241b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c78cb8a1-ca8c-463e-af58-225ca77c241b" (UID: "c78cb8a1-ca8c-463e-af58-225ca77c241b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.818036 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c78cb8a1-ca8c-463e-af58-225ca77c241b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c78cb8a1-ca8c-463e-af58-225ca77c241b" (UID: "c78cb8a1-ca8c-463e-af58-225ca77c241b"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.818449 4698 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c78cb8a1-ca8c-463e-af58-225ca77c241b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.818516 4698 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c78cb8a1-ca8c-463e-af58-225ca77c241b-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.819637 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c78cb8a1-ca8c-463e-af58-225ca77c241b" (UID: "c78cb8a1-ca8c-463e-af58-225ca77c241b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.820026 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c78cb8a1-ca8c-463e-af58-225ca77c241b" (UID: "c78cb8a1-ca8c-463e-af58-225ca77c241b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.820392 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c78cb8a1-ca8c-463e-af58-225ca77c241b" (UID: "c78cb8a1-ca8c-463e-af58-225ca77c241b"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.821310 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c78cb8a1-ca8c-463e-af58-225ca77c241b" (UID: "c78cb8a1-ca8c-463e-af58-225ca77c241b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.827887 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c78cb8a1-ca8c-463e-af58-225ca77c241b-kube-api-access-smd7s" (OuterVolumeSpecName: "kube-api-access-smd7s") pod "c78cb8a1-ca8c-463e-af58-225ca77c241b" (UID: "c78cb8a1-ca8c-463e-af58-225ca77c241b"). InnerVolumeSpecName "kube-api-access-smd7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.828518 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c78cb8a1-ca8c-463e-af58-225ca77c241b-builder-dockercfg-qfmzh-pull" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-pull") pod "c78cb8a1-ca8c-463e-af58-225ca77c241b" (UID: "c78cb8a1-ca8c-463e-af58-225ca77c241b"). InnerVolumeSpecName "builder-dockercfg-qfmzh-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.838384 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c78cb8a1-ca8c-463e-af58-225ca77c241b-builder-dockercfg-qfmzh-push" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-push") pod "c78cb8a1-ca8c-463e-af58-225ca77c241b" (UID: "c78cb8a1-ca8c-463e-af58-225ca77c241b"). InnerVolumeSpecName "builder-dockercfg-qfmzh-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.880645 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c78cb8a1-ca8c-463e-af58-225ca77c241b" (UID: "c78cb8a1-ca8c-463e-af58-225ca77c241b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.920265 4698 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.920323 4698 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.920342 4698 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.920360 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.920379 4698 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.920396 4698 reconciler_common.go:293] "Volume detached for volume 
\"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/c78cb8a1-ca8c-463e-af58-225ca77c241b-builder-dockercfg-qfmzh-push\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.920413 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/c78cb8a1-ca8c-463e-af58-225ca77c241b-builder-dockercfg-qfmzh-pull\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:49 crc kubenswrapper[4698]: I0216 00:21:49.920429 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smd7s\" (UniqueName: \"kubernetes.io/projected/c78cb8a1-ca8c-463e-af58-225ca77c241b-kube-api-access-smd7s\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:50 crc kubenswrapper[4698]: I0216 00:21:50.039024 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c78cb8a1-ca8c-463e-af58-225ca77c241b" (UID: "c78cb8a1-ca8c-463e-af58-225ca77c241b"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:21:50 crc kubenswrapper[4698]: I0216 00:21:50.123050 4698 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:50 crc kubenswrapper[4698]: I0216 00:21:50.365173 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"c78cb8a1-ca8c-463e-af58-225ca77c241b","Type":"ContainerDied","Data":"66f6551f4c21ab12855c7c243d9875c52843c40b37fc5f5c60e19d72d2b0c48d"} Feb 16 00:21:50 crc kubenswrapper[4698]: I0216 00:21:50.365559 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66f6551f4c21ab12855c7c243d9875c52843c40b37fc5f5c60e19d72d2b0c48d" Feb 16 00:21:50 crc kubenswrapper[4698]: I0216 00:21:50.365270 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 16 00:21:52 crc kubenswrapper[4698]: I0216 00:21:52.228067 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c78cb8a1-ca8c-463e-af58-225ca77c241b" (UID: "c78cb8a1-ca8c-463e-af58-225ca77c241b"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:21:52 crc kubenswrapper[4698]: I0216 00:21:52.258590 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c78cb8a1-ca8c-463e-af58-225ca77c241b-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:53 crc kubenswrapper[4698]: I0216 00:21:53.969474 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 16 00:21:53 crc kubenswrapper[4698]: E0216 00:21:53.969897 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6324abe0-286f-4e82-8f3a-f51caa528b3c" containerName="extract-content" Feb 16 00:21:53 crc kubenswrapper[4698]: I0216 00:21:53.969924 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6324abe0-286f-4e82-8f3a-f51caa528b3c" containerName="extract-content" Feb 16 00:21:53 crc kubenswrapper[4698]: E0216 00:21:53.969954 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78cb8a1-ca8c-463e-af58-225ca77c241b" containerName="manage-dockerfile" Feb 16 00:21:53 crc kubenswrapper[4698]: I0216 00:21:53.969970 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c78cb8a1-ca8c-463e-af58-225ca77c241b" containerName="manage-dockerfile" Feb 16 00:21:53 crc kubenswrapper[4698]: E0216 00:21:53.970003 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6324abe0-286f-4e82-8f3a-f51caa528b3c" containerName="registry-server" Feb 16 00:21:53 crc kubenswrapper[4698]: I0216 00:21:53.970022 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6324abe0-286f-4e82-8f3a-f51caa528b3c" containerName="registry-server" Feb 16 00:21:53 crc kubenswrapper[4698]: E0216 00:21:53.970047 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78cb8a1-ca8c-463e-af58-225ca77c241b" containerName="git-clone" Feb 16 00:21:53 crc kubenswrapper[4698]: I0216 00:21:53.970059 4698 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="c78cb8a1-ca8c-463e-af58-225ca77c241b" containerName="git-clone" Feb 16 00:21:53 crc kubenswrapper[4698]: E0216 00:21:53.970082 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6324abe0-286f-4e82-8f3a-f51caa528b3c" containerName="extract-utilities" Feb 16 00:21:53 crc kubenswrapper[4698]: I0216 00:21:53.970094 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6324abe0-286f-4e82-8f3a-f51caa528b3c" containerName="extract-utilities" Feb 16 00:21:53 crc kubenswrapper[4698]: E0216 00:21:53.970113 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78cb8a1-ca8c-463e-af58-225ca77c241b" containerName="docker-build" Feb 16 00:21:53 crc kubenswrapper[4698]: I0216 00:21:53.970124 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c78cb8a1-ca8c-463e-af58-225ca77c241b" containerName="docker-build" Feb 16 00:21:53 crc kubenswrapper[4698]: I0216 00:21:53.970329 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="6324abe0-286f-4e82-8f3a-f51caa528b3c" containerName="registry-server" Feb 16 00:21:53 crc kubenswrapper[4698]: I0216 00:21:53.970352 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c78cb8a1-ca8c-463e-af58-225ca77c241b" containerName="docker-build" Feb 16 00:21:53 crc kubenswrapper[4698]: I0216 00:21:53.971745 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:53 crc kubenswrapper[4698]: I0216 00:21:53.976365 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-qfmzh" Feb 16 00:21:53 crc kubenswrapper[4698]: I0216 00:21:53.976854 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.162343 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.163319 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.168891 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.168964 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.169009 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: 
\"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.169038 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.169175 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgtmc\" (UniqueName: \"kubernetes.io/projected/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-kube-api-access-tgtmc\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.169207 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-builder-dockercfg-qfmzh-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.169237 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.169296 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.169326 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.169414 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.169447 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-builder-dockercfg-qfmzh-push\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.169546 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.171283 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.271667 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.272127 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.272160 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.272193 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.272233 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tgtmc\" (UniqueName: \"kubernetes.io/projected/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-kube-api-access-tgtmc\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.272257 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-builder-dockercfg-qfmzh-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.272276 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.272298 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.272320 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.272341 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.272366 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-builder-dockercfg-qfmzh-push\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.272398 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.272420 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.273248 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc 
kubenswrapper[4698]: I0216 00:21:54.273752 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.273920 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.274181 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.274516 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.274532 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc 
kubenswrapper[4698]: I0216 00:21:54.274896 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.275046 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.283432 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-builder-dockercfg-qfmzh-push\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.285309 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-builder-dockercfg-qfmzh-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.299519 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgtmc\" (UniqueName: \"kubernetes.io/projected/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-kube-api-access-tgtmc\") pod \"smart-gateway-operator-1-build\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.473741 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:21:54 crc kubenswrapper[4698]: I0216 00:21:54.812868 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 16 00:21:55 crc kubenswrapper[4698]: I0216 00:21:55.405337 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae","Type":"ContainerStarted","Data":"4de62528f586578ffdde532e11ff1ee358af01ede231634a233b63a053c0c81c"} Feb 16 00:21:55 crc kubenswrapper[4698]: I0216 00:21:55.405435 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae","Type":"ContainerStarted","Data":"f726c2f7e18e88898f551675a68d07ddc3ae1ed0ab291cfb82fe3660f5796b43"} Feb 16 00:21:55 crc kubenswrapper[4698]: E0216 00:21:55.567390 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e2e9625_d9c0_47aa_8d2a_bfe3348822ae.slice/crio-conmon-4de62528f586578ffdde532e11ff1ee358af01ede231634a233b63a053c0c81c.scope\": RecentStats: unable to find data in memory cache]" Feb 16 00:21:56 crc kubenswrapper[4698]: I0216 00:21:56.414811 4698 generic.go:334] "Generic (PLEG): container finished" podID="1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" containerID="4de62528f586578ffdde532e11ff1ee358af01ede231634a233b63a053c0c81c" exitCode=0 Feb 16 00:21:56 crc kubenswrapper[4698]: I0216 00:21:56.414935 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" 
event={"ID":"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae","Type":"ContainerDied","Data":"4de62528f586578ffdde532e11ff1ee358af01ede231634a233b63a053c0c81c"} Feb 16 00:21:57 crc kubenswrapper[4698]: I0216 00:21:57.045763 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:21:57 crc kubenswrapper[4698]: I0216 00:21:57.046060 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:21:57 crc kubenswrapper[4698]: I0216 00:21:57.431400 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae","Type":"ContainerStarted","Data":"594291420af0095771e5ac61e1ae1e135a9706d9ad92f50e1781a76a629f1da0"} Feb 16 00:21:57 crc kubenswrapper[4698]: I0216 00:21:57.473375 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=4.473340566 podStartE2EDuration="4.473340566s" podCreationTimestamp="2026-02-16 00:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:21:57.466610357 +0000 UTC m=+927.124509199" watchObservedRunningTime="2026-02-16 00:21:57.473340566 +0000 UTC m=+927.131239368" Feb 16 00:22:02 crc kubenswrapper[4698]: I0216 00:22:02.346350 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sv7wq"] Feb 16 00:22:02 crc 
kubenswrapper[4698]: I0216 00:22:02.348312 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:02 crc kubenswrapper[4698]: I0216 00:22:02.364705 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sv7wq"] Feb 16 00:22:02 crc kubenswrapper[4698]: I0216 00:22:02.413288 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdr6s\" (UniqueName: \"kubernetes.io/projected/270590a1-def0-4937-b530-da92652ecacf-kube-api-access-zdr6s\") pod \"certified-operators-sv7wq\" (UID: \"270590a1-def0-4937-b530-da92652ecacf\") " pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:02 crc kubenswrapper[4698]: I0216 00:22:02.413339 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/270590a1-def0-4937-b530-da92652ecacf-catalog-content\") pod \"certified-operators-sv7wq\" (UID: \"270590a1-def0-4937-b530-da92652ecacf\") " pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:02 crc kubenswrapper[4698]: I0216 00:22:02.413372 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/270590a1-def0-4937-b530-da92652ecacf-utilities\") pod \"certified-operators-sv7wq\" (UID: \"270590a1-def0-4937-b530-da92652ecacf\") " pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:02 crc kubenswrapper[4698]: I0216 00:22:02.514295 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdr6s\" (UniqueName: \"kubernetes.io/projected/270590a1-def0-4937-b530-da92652ecacf-kube-api-access-zdr6s\") pod \"certified-operators-sv7wq\" (UID: \"270590a1-def0-4937-b530-da92652ecacf\") " 
pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:02 crc kubenswrapper[4698]: I0216 00:22:02.514346 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/270590a1-def0-4937-b530-da92652ecacf-catalog-content\") pod \"certified-operators-sv7wq\" (UID: \"270590a1-def0-4937-b530-da92652ecacf\") " pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:02 crc kubenswrapper[4698]: I0216 00:22:02.514373 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/270590a1-def0-4937-b530-da92652ecacf-utilities\") pod \"certified-operators-sv7wq\" (UID: \"270590a1-def0-4937-b530-da92652ecacf\") " pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:02 crc kubenswrapper[4698]: I0216 00:22:02.514858 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/270590a1-def0-4937-b530-da92652ecacf-utilities\") pod \"certified-operators-sv7wq\" (UID: \"270590a1-def0-4937-b530-da92652ecacf\") " pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:02 crc kubenswrapper[4698]: I0216 00:22:02.515155 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/270590a1-def0-4937-b530-da92652ecacf-catalog-content\") pod \"certified-operators-sv7wq\" (UID: \"270590a1-def0-4937-b530-da92652ecacf\") " pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:02 crc kubenswrapper[4698]: I0216 00:22:02.533890 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdr6s\" (UniqueName: \"kubernetes.io/projected/270590a1-def0-4937-b530-da92652ecacf-kube-api-access-zdr6s\") pod \"certified-operators-sv7wq\" (UID: \"270590a1-def0-4937-b530-da92652ecacf\") " 
pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:02 crc kubenswrapper[4698]: I0216 00:22:02.666399 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:02 crc kubenswrapper[4698]: I0216 00:22:02.921951 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sv7wq"] Feb 16 00:22:03 crc kubenswrapper[4698]: I0216 00:22:03.474061 4698 generic.go:334] "Generic (PLEG): container finished" podID="270590a1-def0-4937-b530-da92652ecacf" containerID="1a4f23eff4f3fdd1750c794014dbd75c4548934da53603b3da5d18cb96288531" exitCode=0 Feb 16 00:22:03 crc kubenswrapper[4698]: I0216 00:22:03.474259 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sv7wq" event={"ID":"270590a1-def0-4937-b530-da92652ecacf","Type":"ContainerDied","Data":"1a4f23eff4f3fdd1750c794014dbd75c4548934da53603b3da5d18cb96288531"} Feb 16 00:22:03 crc kubenswrapper[4698]: I0216 00:22:03.474678 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sv7wq" event={"ID":"270590a1-def0-4937-b530-da92652ecacf","Type":"ContainerStarted","Data":"8422297df4340461d0642d29ada84ed06aa7e4162bbdfdf7c25b37307c93bc08"} Feb 16 00:22:04 crc kubenswrapper[4698]: I0216 00:22:04.892322 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 16 00:22:04 crc kubenswrapper[4698]: I0216 00:22:04.892773 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" containerName="docker-build" containerID="cri-o://594291420af0095771e5ac61e1ae1e135a9706d9ad92f50e1781a76a629f1da0" gracePeriod=30 Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.497142 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_1e2e9625-d9c0-47aa-8d2a-bfe3348822ae/docker-build/0.log" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.498326 4698 generic.go:334] "Generic (PLEG): container finished" podID="1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" containerID="594291420af0095771e5ac61e1ae1e135a9706d9ad92f50e1781a76a629f1da0" exitCode=1 Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.498353 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae","Type":"ContainerDied","Data":"594291420af0095771e5ac61e1ae1e135a9706d9ad92f50e1781a76a629f1da0"} Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.522499 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.523947 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.528356 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.531066 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.531086 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.571075 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.679572 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhqnb\" 
(UniqueName: \"kubernetes.io/projected/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-kube-api-access-jhqnb\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.679633 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.679653 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.679676 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.679835 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc 
kubenswrapper[4698]: I0216 00:22:06.679926 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-builder-dockercfg-qfmzh-push\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.679982 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-builder-dockercfg-qfmzh-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.680037 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.680085 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.680114 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.680135 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.680159 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.782242 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.782409 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-builder-dockercfg-qfmzh-push\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.782442 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.782510 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-builder-dockercfg-qfmzh-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.782589 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.782703 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.783216 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 
00:22:06.783484 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.783758 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.783841 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.783956 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.783975 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc 
kubenswrapper[4698]: I0216 00:22:06.784106 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.784224 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhqnb\" (UniqueName: \"kubernetes.io/projected/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-kube-api-access-jhqnb\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.784308 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.784378 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.784473 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " 
pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.785195 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.785927 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.786220 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.786597 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.788978 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-builder-dockercfg-qfmzh-push\") pod \"smart-gateway-operator-2-build\" (UID: 
\"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.803787 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-builder-dockercfg-qfmzh-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.810122 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhqnb\" (UniqueName: \"kubernetes.io/projected/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-kube-api-access-jhqnb\") pod \"smart-gateway-operator-2-build\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:06 crc kubenswrapper[4698]: I0216 00:22:06.840573 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.097210 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_1e2e9625-d9c0-47aa-8d2a-bfe3348822ae/docker-build/0.log" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.098142 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.128928 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 16 00:22:07 crc kubenswrapper[4698]: W0216 00:22:07.132938 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99f3ff6c_9591_4319_b93c_c9a0863ce4d6.slice/crio-60fcd5cff00cc831f5e05eb953e0c0be6058c84a9e8306a38bd0b292ee2b916b WatchSource:0}: Error finding container 60fcd5cff00cc831f5e05eb953e0c0be6058c84a9e8306a38bd0b292ee2b916b: Status 404 returned error can't find the container with id 60fcd5cff00cc831f5e05eb953e0c0be6058c84a9e8306a38bd0b292ee2b916b Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.293146 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-buildcachedir\") pod \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.293504 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-blob-cache\") pod \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.293251 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" (UID: "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.293551 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-ca-bundles\") pod \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.293674 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgtmc\" (UniqueName: \"kubernetes.io/projected/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-kube-api-access-tgtmc\") pod \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.293709 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-container-storage-run\") pod \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.293802 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-container-storage-root\") pod \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.293850 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-system-configs\") pod \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.293894 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-buildworkdir\") pod \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.293926 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-node-pullsecrets\") pod \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.293947 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-proxy-ca-bundles\") pod \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.293964 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-builder-dockercfg-qfmzh-push\") pod \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.293986 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-builder-dockercfg-qfmzh-pull\") pod \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\" (UID: \"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae\") " Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.294393 4698 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-buildcachedir\") on node \"crc\" 
DevicePath \"\"" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.294449 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" (UID: "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.294444 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" (UID: "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.294670 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" (UID: "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.295513 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" (UID: "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.299007 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-builder-dockercfg-qfmzh-pull" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-pull") pod "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" (UID: "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae"). InnerVolumeSpecName "builder-dockercfg-qfmzh-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.299047 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-kube-api-access-tgtmc" (OuterVolumeSpecName: "kube-api-access-tgtmc") pod "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" (UID: "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae"). InnerVolumeSpecName "kube-api-access-tgtmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.300070 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-builder-dockercfg-qfmzh-push" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-push") pod "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" (UID: "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae"). InnerVolumeSpecName "builder-dockercfg-qfmzh-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.301303 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" (UID: "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.301862 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" (UID: "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.395799 4698 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.396592 4698 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.396652 4698 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.396668 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-builder-dockercfg-qfmzh-push\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.396678 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-builder-dockercfg-qfmzh-pull\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.396688 4698 
reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.396698 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgtmc\" (UniqueName: \"kubernetes.io/projected/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-kube-api-access-tgtmc\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.396708 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.396719 4698 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.467367 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" (UID: "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.499415 4698 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.545209 4698 generic.go:334] "Generic (PLEG): container finished" podID="270590a1-def0-4937-b530-da92652ecacf" containerID="ad5ee6b8ab4f0b5228d9bb32a4c49a7bcbd0f08a6c3e8843d7b4d6a3d9db299a" exitCode=0 Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.545298 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sv7wq" event={"ID":"270590a1-def0-4937-b530-da92652ecacf","Type":"ContainerDied","Data":"ad5ee6b8ab4f0b5228d9bb32a4c49a7bcbd0f08a6c3e8843d7b4d6a3d9db299a"} Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.575568 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_1e2e9625-d9c0-47aa-8d2a-bfe3348822ae/docker-build/0.log" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.576059 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"1e2e9625-d9c0-47aa-8d2a-bfe3348822ae","Type":"ContainerDied","Data":"f726c2f7e18e88898f551675a68d07ddc3ae1ed0ab291cfb82fe3660f5796b43"} Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.576103 4698 scope.go:117] "RemoveContainer" containerID="594291420af0095771e5ac61e1ae1e135a9706d9ad92f50e1781a76a629f1da0" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.576223 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.584540 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"99f3ff6c-9591-4319-b93c-c9a0863ce4d6","Type":"ContainerStarted","Data":"60fcd5cff00cc831f5e05eb953e0c0be6058c84a9e8306a38bd0b292ee2b916b"} Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.657448 4698 scope.go:117] "RemoveContainer" containerID="4de62528f586578ffdde532e11ff1ee358af01ede231634a233b63a053c0c81c" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.735132 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" (UID: "1e2e9625-d9c0-47aa-8d2a-bfe3348822ae"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.805912 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.923401 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 16 00:22:07 crc kubenswrapper[4698]: I0216 00:22:07.929094 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 16 00:22:08 crc kubenswrapper[4698]: I0216 00:22:08.592235 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" 
event={"ID":"99f3ff6c-9591-4319-b93c-c9a0863ce4d6","Type":"ContainerStarted","Data":"dd673d8441a8b26650ca2eb9c4e9b2187338bc54d9d63db402d88d80066a4fcf"} Feb 16 00:22:08 crc kubenswrapper[4698]: E0216 00:22:08.955559 4698 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.143:48922->38.102.83.143:46243: read tcp 38.102.83.143:48922->38.102.83.143:46243: read: connection reset by peer Feb 16 00:22:09 crc kubenswrapper[4698]: I0216 00:22:09.239472 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" path="/var/lib/kubelet/pods/1e2e9625-d9c0-47aa-8d2a-bfe3348822ae/volumes" Feb 16 00:22:09 crc kubenswrapper[4698]: I0216 00:22:09.600795 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sv7wq" event={"ID":"270590a1-def0-4937-b530-da92652ecacf","Type":"ContainerStarted","Data":"b6e6fd8c81b5fdc92c17ba2522e0d54126bb3fb1dba6732d3af57745d35a60eb"} Feb 16 00:22:09 crc kubenswrapper[4698]: I0216 00:22:09.602260 4698 generic.go:334] "Generic (PLEG): container finished" podID="99f3ff6c-9591-4319-b93c-c9a0863ce4d6" containerID="dd673d8441a8b26650ca2eb9c4e9b2187338bc54d9d63db402d88d80066a4fcf" exitCode=0 Feb 16 00:22:09 crc kubenswrapper[4698]: I0216 00:22:09.602313 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"99f3ff6c-9591-4319-b93c-c9a0863ce4d6","Type":"ContainerDied","Data":"dd673d8441a8b26650ca2eb9c4e9b2187338bc54d9d63db402d88d80066a4fcf"} Feb 16 00:22:09 crc kubenswrapper[4698]: I0216 00:22:09.640948 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sv7wq" podStartSLOduration=2.704723268 podStartE2EDuration="7.640929341s" podCreationTimestamp="2026-02-16 00:22:02 +0000 UTC" firstStartedPulling="2026-02-16 00:22:03.477112138 +0000 UTC m=+933.135010900" lastFinishedPulling="2026-02-16 
00:22:08.413318211 +0000 UTC m=+938.071216973" observedRunningTime="2026-02-16 00:22:09.638199956 +0000 UTC m=+939.296098718" watchObservedRunningTime="2026-02-16 00:22:09.640929341 +0000 UTC m=+939.298828103" Feb 16 00:22:10 crc kubenswrapper[4698]: I0216 00:22:10.610898 4698 generic.go:334] "Generic (PLEG): container finished" podID="99f3ff6c-9591-4319-b93c-c9a0863ce4d6" containerID="a8d22f30623022ea895cb820091d362d42a4ce99ac328b37a7054e48cf8cbb4a" exitCode=0 Feb 16 00:22:10 crc kubenswrapper[4698]: I0216 00:22:10.611030 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"99f3ff6c-9591-4319-b93c-c9a0863ce4d6","Type":"ContainerDied","Data":"a8d22f30623022ea895cb820091d362d42a4ce99ac328b37a7054e48cf8cbb4a"} Feb 16 00:22:10 crc kubenswrapper[4698]: I0216 00:22:10.655031 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_99f3ff6c-9591-4319-b93c-c9a0863ce4d6/manage-dockerfile/0.log" Feb 16 00:22:11 crc kubenswrapper[4698]: I0216 00:22:11.622661 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"99f3ff6c-9591-4319-b93c-c9a0863ce4d6","Type":"ContainerStarted","Data":"a18e4231ba4b9cec647d2f1c7df90f82846226a1f39a1bb600a64c10026cc3d7"} Feb 16 00:22:11 crc kubenswrapper[4698]: I0216 00:22:11.671998 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=5.6719720559999995 podStartE2EDuration="5.671972056s" podCreationTimestamp="2026-02-16 00:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:22:11.664576575 +0000 UTC m=+941.322475347" watchObservedRunningTime="2026-02-16 00:22:11.671972056 +0000 UTC m=+941.329870858" Feb 16 00:22:12 crc kubenswrapper[4698]: I0216 00:22:12.667091 
4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:12 crc kubenswrapper[4698]: I0216 00:22:12.667921 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:12 crc kubenswrapper[4698]: I0216 00:22:12.722651 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:13 crc kubenswrapper[4698]: I0216 00:22:13.686923 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:13 crc kubenswrapper[4698]: I0216 00:22:13.750102 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sv7wq"] Feb 16 00:22:15 crc kubenswrapper[4698]: I0216 00:22:15.653515 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sv7wq" podUID="270590a1-def0-4937-b530-da92652ecacf" containerName="registry-server" containerID="cri-o://b6e6fd8c81b5fdc92c17ba2522e0d54126bb3fb1dba6732d3af57745d35a60eb" gracePeriod=2 Feb 16 00:22:16 crc kubenswrapper[4698]: I0216 00:22:16.672227 4698 generic.go:334] "Generic (PLEG): container finished" podID="270590a1-def0-4937-b530-da92652ecacf" containerID="b6e6fd8c81b5fdc92c17ba2522e0d54126bb3fb1dba6732d3af57745d35a60eb" exitCode=0 Feb 16 00:22:16 crc kubenswrapper[4698]: I0216 00:22:16.672260 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sv7wq" event={"ID":"270590a1-def0-4937-b530-da92652ecacf","Type":"ContainerDied","Data":"b6e6fd8c81b5fdc92c17ba2522e0d54126bb3fb1dba6732d3af57745d35a60eb"} Feb 16 00:22:16 crc kubenswrapper[4698]: I0216 00:22:16.758202 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:16 crc kubenswrapper[4698]: I0216 00:22:16.846531 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/270590a1-def0-4937-b530-da92652ecacf-utilities\") pod \"270590a1-def0-4937-b530-da92652ecacf\" (UID: \"270590a1-def0-4937-b530-da92652ecacf\") " Feb 16 00:22:16 crc kubenswrapper[4698]: I0216 00:22:16.846607 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/270590a1-def0-4937-b530-da92652ecacf-catalog-content\") pod \"270590a1-def0-4937-b530-da92652ecacf\" (UID: \"270590a1-def0-4937-b530-da92652ecacf\") " Feb 16 00:22:16 crc kubenswrapper[4698]: I0216 00:22:16.846702 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdr6s\" (UniqueName: \"kubernetes.io/projected/270590a1-def0-4937-b530-da92652ecacf-kube-api-access-zdr6s\") pod \"270590a1-def0-4937-b530-da92652ecacf\" (UID: \"270590a1-def0-4937-b530-da92652ecacf\") " Feb 16 00:22:16 crc kubenswrapper[4698]: I0216 00:22:16.855364 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/270590a1-def0-4937-b530-da92652ecacf-kube-api-access-zdr6s" (OuterVolumeSpecName: "kube-api-access-zdr6s") pod "270590a1-def0-4937-b530-da92652ecacf" (UID: "270590a1-def0-4937-b530-da92652ecacf"). InnerVolumeSpecName "kube-api-access-zdr6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:22:16 crc kubenswrapper[4698]: I0216 00:22:16.860713 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/270590a1-def0-4937-b530-da92652ecacf-utilities" (OuterVolumeSpecName: "utilities") pod "270590a1-def0-4937-b530-da92652ecacf" (UID: "270590a1-def0-4937-b530-da92652ecacf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:22:16 crc kubenswrapper[4698]: I0216 00:22:16.928455 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/270590a1-def0-4937-b530-da92652ecacf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "270590a1-def0-4937-b530-da92652ecacf" (UID: "270590a1-def0-4937-b530-da92652ecacf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:22:16 crc kubenswrapper[4698]: I0216 00:22:16.948333 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/270590a1-def0-4937-b530-da92652ecacf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:16 crc kubenswrapper[4698]: I0216 00:22:16.948368 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdr6s\" (UniqueName: \"kubernetes.io/projected/270590a1-def0-4937-b530-da92652ecacf-kube-api-access-zdr6s\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:16 crc kubenswrapper[4698]: I0216 00:22:16.948380 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/270590a1-def0-4937-b530-da92652ecacf-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:17 crc kubenswrapper[4698]: I0216 00:22:17.680846 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sv7wq" event={"ID":"270590a1-def0-4937-b530-da92652ecacf","Type":"ContainerDied","Data":"8422297df4340461d0642d29ada84ed06aa7e4162bbdfdf7c25b37307c93bc08"} Feb 16 00:22:17 crc kubenswrapper[4698]: I0216 00:22:17.680906 4698 scope.go:117] "RemoveContainer" containerID="b6e6fd8c81b5fdc92c17ba2522e0d54126bb3fb1dba6732d3af57745d35a60eb" Feb 16 00:22:17 crc kubenswrapper[4698]: I0216 00:22:17.681801 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sv7wq" Feb 16 00:22:17 crc kubenswrapper[4698]: I0216 00:22:17.700062 4698 scope.go:117] "RemoveContainer" containerID="ad5ee6b8ab4f0b5228d9bb32a4c49a7bcbd0f08a6c3e8843d7b4d6a3d9db299a" Feb 16 00:22:17 crc kubenswrapper[4698]: I0216 00:22:17.705787 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sv7wq"] Feb 16 00:22:17 crc kubenswrapper[4698]: I0216 00:22:17.707570 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sv7wq"] Feb 16 00:22:17 crc kubenswrapper[4698]: I0216 00:22:17.718784 4698 scope.go:117] "RemoveContainer" containerID="1a4f23eff4f3fdd1750c794014dbd75c4548934da53603b3da5d18cb96288531" Feb 16 00:22:19 crc kubenswrapper[4698]: I0216 00:22:19.243789 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="270590a1-def0-4937-b530-da92652ecacf" path="/var/lib/kubelet/pods/270590a1-def0-4937-b530-da92652ecacf/volumes" Feb 16 00:22:27 crc kubenswrapper[4698]: I0216 00:22:27.045817 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:22:27 crc kubenswrapper[4698]: I0216 00:22:27.046466 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:22:27 crc kubenswrapper[4698]: I0216 00:22:27.046525 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 
00:22:27 crc kubenswrapper[4698]: I0216 00:22:27.047308 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6541b3cb76f710dedaef0b85b0e104e861ef72466cd38ea058959a35248ef97"} pod="openshift-machine-config-operator/machine-config-daemon-z56m2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 00:22:27 crc kubenswrapper[4698]: I0216 00:22:27.047372 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" containerID="cri-o://d6541b3cb76f710dedaef0b85b0e104e861ef72466cd38ea058959a35248ef97" gracePeriod=600 Feb 16 00:22:27 crc kubenswrapper[4698]: I0216 00:22:27.753641 4698 generic.go:334] "Generic (PLEG): container finished" podID="7b351654-277f-4d0d-84f9-b003f934936c" containerID="d6541b3cb76f710dedaef0b85b0e104e861ef72466cd38ea058959a35248ef97" exitCode=0 Feb 16 00:22:27 crc kubenswrapper[4698]: I0216 00:22:27.753759 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerDied","Data":"d6541b3cb76f710dedaef0b85b0e104e861ef72466cd38ea058959a35248ef97"} Feb 16 00:22:27 crc kubenswrapper[4698]: I0216 00:22:27.754324 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerStarted","Data":"39d1b91146f7648e212a74b13f222a511f40648ab7fc41107476755783be4ff2"} Feb 16 00:22:27 crc kubenswrapper[4698]: I0216 00:22:27.754374 4698 scope.go:117] "RemoveContainer" containerID="28455df6b45ac3d964cdd4d7f6adb7fb0a6e0a48a0dcb629da0d78838dbdbdad" Feb 16 00:23:25 crc kubenswrapper[4698]: I0216 00:23:25.183716 4698 
generic.go:334] "Generic (PLEG): container finished" podID="99f3ff6c-9591-4319-b93c-c9a0863ce4d6" containerID="a18e4231ba4b9cec647d2f1c7df90f82846226a1f39a1bb600a64c10026cc3d7" exitCode=0 Feb 16 00:23:25 crc kubenswrapper[4698]: I0216 00:23:25.183908 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"99f3ff6c-9591-4319-b93c-c9a0863ce4d6","Type":"ContainerDied","Data":"a18e4231ba4b9cec647d2f1c7df90f82846226a1f39a1bb600a64c10026cc3d7"} Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.499361 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.678695 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-builder-dockercfg-qfmzh-push\") pod \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.678858 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-container-storage-root\") pod \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.678932 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-buildcachedir\") pod \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.679000 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-proxy-ca-bundles\") pod \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.679092 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-system-configs\") pod \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.679144 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-node-pullsecrets\") pod \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.679211 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhqnb\" (UniqueName: \"kubernetes.io/projected/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-kube-api-access-jhqnb\") pod \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.679285 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-blob-cache\") pod \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.679334 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-buildworkdir\") pod \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " Feb 16 00:23:26 
crc kubenswrapper[4698]: I0216 00:23:26.679393 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-container-storage-run\") pod \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.679430 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-ca-bundles\") pod \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.679509 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-builder-dockercfg-qfmzh-pull\") pod \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\" (UID: \"99f3ff6c-9591-4319-b93c-c9a0863ce4d6\") " Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.680880 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "99f3ff6c-9591-4319-b93c-c9a0863ce4d6" (UID: "99f3ff6c-9591-4319-b93c-c9a0863ce4d6"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.680983 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "99f3ff6c-9591-4319-b93c-c9a0863ce4d6" (UID: "99f3ff6c-9591-4319-b93c-c9a0863ce4d6"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.681006 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "99f3ff6c-9591-4319-b93c-c9a0863ce4d6" (UID: "99f3ff6c-9591-4319-b93c-c9a0863ce4d6"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.681051 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "99f3ff6c-9591-4319-b93c-c9a0863ce4d6" (UID: "99f3ff6c-9591-4319-b93c-c9a0863ce4d6"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.681736 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "99f3ff6c-9591-4319-b93c-c9a0863ce4d6" (UID: "99f3ff6c-9591-4319-b93c-c9a0863ce4d6"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.682017 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "99f3ff6c-9591-4319-b93c-c9a0863ce4d6" (UID: "99f3ff6c-9591-4319-b93c-c9a0863ce4d6"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.685848 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "99f3ff6c-9591-4319-b93c-c9a0863ce4d6" (UID: "99f3ff6c-9591-4319-b93c-c9a0863ce4d6"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.686262 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-builder-dockercfg-qfmzh-push" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-push") pod "99f3ff6c-9591-4319-b93c-c9a0863ce4d6" (UID: "99f3ff6c-9591-4319-b93c-c9a0863ce4d6"). InnerVolumeSpecName "builder-dockercfg-qfmzh-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.686524 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-builder-dockercfg-qfmzh-pull" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-pull") pod "99f3ff6c-9591-4319-b93c-c9a0863ce4d6" (UID: "99f3ff6c-9591-4319-b93c-c9a0863ce4d6"). InnerVolumeSpecName "builder-dockercfg-qfmzh-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.688763 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-kube-api-access-jhqnb" (OuterVolumeSpecName: "kube-api-access-jhqnb") pod "99f3ff6c-9591-4319-b93c-c9a0863ce4d6" (UID: "99f3ff6c-9591-4319-b93c-c9a0863ce4d6"). InnerVolumeSpecName "kube-api-access-jhqnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.782003 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-builder-dockercfg-qfmzh-pull\") on node \"crc\" DevicePath \"\"" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.782054 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-builder-dockercfg-qfmzh-push\") on node \"crc\" DevicePath \"\"" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.782074 4698 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.782091 4698 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.782109 4698 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.782125 4698 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.782143 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhqnb\" (UniqueName: \"kubernetes.io/projected/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-kube-api-access-jhqnb\") on 
node \"crc\" DevicePath \"\"" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.782161 4698 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.782179 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.782195 4698 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.908825 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "99f3ff6c-9591-4319-b93c-c9a0863ce4d6" (UID: "99f3ff6c-9591-4319-b93c-c9a0863ce4d6"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:23:26 crc kubenswrapper[4698]: I0216 00:23:26.983879 4698 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 16 00:23:27 crc kubenswrapper[4698]: I0216 00:23:27.202047 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"99f3ff6c-9591-4319-b93c-c9a0863ce4d6","Type":"ContainerDied","Data":"60fcd5cff00cc831f5e05eb953e0c0be6058c84a9e8306a38bd0b292ee2b916b"} Feb 16 00:23:27 crc kubenswrapper[4698]: I0216 00:23:27.202110 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60fcd5cff00cc831f5e05eb953e0c0be6058c84a9e8306a38bd0b292ee2b916b" Feb 16 00:23:27 crc kubenswrapper[4698]: I0216 00:23:27.202077 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 16 00:23:29 crc kubenswrapper[4698]: I0216 00:23:29.023954 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "99f3ff6c-9591-4319-b93c-c9a0863ce4d6" (UID: "99f3ff6c-9591-4319-b93c-c9a0863ce4d6"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:23:29 crc kubenswrapper[4698]: I0216 00:23:29.119320 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/99f3ff6c-9591-4319-b93c-c9a0863ce4d6-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.018424 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 16 00:23:31 crc kubenswrapper[4698]: E0216 00:23:31.018912 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f3ff6c-9591-4319-b93c-c9a0863ce4d6" containerName="manage-dockerfile" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.018939 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f3ff6c-9591-4319-b93c-c9a0863ce4d6" containerName="manage-dockerfile" Feb 16 00:23:31 crc kubenswrapper[4698]: E0216 00:23:31.018958 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270590a1-def0-4937-b530-da92652ecacf" containerName="extract-content" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.018974 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="270590a1-def0-4937-b530-da92652ecacf" containerName="extract-content" Feb 16 00:23:31 crc kubenswrapper[4698]: E0216 00:23:31.019006 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f3ff6c-9591-4319-b93c-c9a0863ce4d6" containerName="docker-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.019021 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f3ff6c-9591-4319-b93c-c9a0863ce4d6" containerName="docker-build" Feb 16 00:23:31 crc kubenswrapper[4698]: E0216 00:23:31.019043 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270590a1-def0-4937-b530-da92652ecacf" containerName="extract-utilities" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.019057 4698 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="270590a1-def0-4937-b530-da92652ecacf" containerName="extract-utilities" Feb 16 00:23:31 crc kubenswrapper[4698]: E0216 00:23:31.019080 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f3ff6c-9591-4319-b93c-c9a0863ce4d6" containerName="git-clone" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.019092 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f3ff6c-9591-4319-b93c-c9a0863ce4d6" containerName="git-clone" Feb 16 00:23:31 crc kubenswrapper[4698]: E0216 00:23:31.019117 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" containerName="docker-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.019130 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" containerName="docker-build" Feb 16 00:23:31 crc kubenswrapper[4698]: E0216 00:23:31.019146 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" containerName="manage-dockerfile" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.019158 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" containerName="manage-dockerfile" Feb 16 00:23:31 crc kubenswrapper[4698]: E0216 00:23:31.019179 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270590a1-def0-4937-b530-da92652ecacf" containerName="registry-server" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.019192 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="270590a1-def0-4937-b530-da92652ecacf" containerName="registry-server" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.019382 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2e9625-d9c0-47aa-8d2a-bfe3348822ae" containerName="docker-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.019407 4698 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="270590a1-def0-4937-b530-da92652ecacf" containerName="registry-server" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.019435 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f3ff6c-9591-4319-b93c-c9a0863ce4d6" containerName="docker-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.020953 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.024740 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.025409 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.025492 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-qfmzh" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.026597 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.050576 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.077469 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-builder-dockercfg-qfmzh-push\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.077607 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: 
\"kubernetes.io/secret/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-builder-dockercfg-qfmzh-pull\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.077695 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcn9n\" (UniqueName: \"kubernetes.io/projected/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-kube-api-access-dcn9n\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.077840 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.077942 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-buildcachedir\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.077986 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.078027 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.078050 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-container-storage-root\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.078072 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-container-storage-run\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.078120 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-system-configs\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.078215 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.078262 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-buildworkdir\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.180024 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcn9n\" (UniqueName: \"kubernetes.io/projected/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-kube-api-access-dcn9n\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.180100 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.180189 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-buildcachedir\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.180673 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-buildcachedir\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build" Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.180756 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-blob-cache\") pod \"sg-core-1-build\" (UID: 
\"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.180895 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.180982 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-container-storage-root\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.181508 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.181546 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-container-storage-root\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.181386 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.181670 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-container-storage-run\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.181738 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-system-configs\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.181799 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.181839 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-buildworkdir\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.181898 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-builder-dockercfg-qfmzh-push\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.181924 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-builder-dockercfg-qfmzh-pull\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.182383 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-container-storage-run\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.182474 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-buildworkdir\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.183106 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.184089 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-qfmzh"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.184702 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.185267 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.195406 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.195885 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-system-configs\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.196570 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.200249 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-builder-dockercfg-qfmzh-pull\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.200592 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-builder-dockercfg-qfmzh-push\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.201370 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcn9n\" (UniqueName: \"kubernetes.io/projected/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-kube-api-access-dcn9n\") pod \"sg-core-1-build\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") " pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.383591 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:31 crc kubenswrapper[4698]: I0216 00:23:31.657167 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"]
Feb 16 00:23:32 crc kubenswrapper[4698]: I0216 00:23:32.256318 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa1a8fc5-b5e1-4271-9632-861b6d0c601d" containerID="5a1788dc213e3176220e25167aafd5fee7d1d572d30ac1eb3c76c36be88f2063" exitCode=0
Feb 16 00:23:32 crc kubenswrapper[4698]: I0216 00:23:32.256398 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"aa1a8fc5-b5e1-4271-9632-861b6d0c601d","Type":"ContainerDied","Data":"5a1788dc213e3176220e25167aafd5fee7d1d572d30ac1eb3c76c36be88f2063"}
Feb 16 00:23:32 crc kubenswrapper[4698]: I0216 00:23:32.256779 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"aa1a8fc5-b5e1-4271-9632-861b6d0c601d","Type":"ContainerStarted","Data":"35f8edc037d213fa9c3f80bcec56f5b14691b5438776a7e266925892dd8ab521"}
Feb 16 00:23:33 crc kubenswrapper[4698]: I0216 00:23:33.275521 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"aa1a8fc5-b5e1-4271-9632-861b6d0c601d","Type":"ContainerStarted","Data":"572a874b7ade9594b6e1e1759ade8d2edd94d167438991a1de4fccc56e8a60de"}
Feb 16 00:23:33 crc kubenswrapper[4698]: I0216 00:23:33.324499 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.324470427 podStartE2EDuration="3.324470427s" podCreationTimestamp="2026-02-16 00:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:23:33.317652525 +0000 UTC m=+1022.975551317" watchObservedRunningTime="2026-02-16 00:23:33.324470427 +0000 UTC m=+1022.982369219"
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.355950 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"]
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.356914 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="aa1a8fc5-b5e1-4271-9632-861b6d0c601d" containerName="docker-build" containerID="cri-o://572a874b7ade9594b6e1e1759ade8d2edd94d167438991a1de4fccc56e8a60de" gracePeriod=30
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.715167 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_aa1a8fc5-b5e1-4271-9632-861b6d0c601d/docker-build/0.log"
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.715660 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.846697 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcn9n\" (UniqueName: \"kubernetes.io/projected/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-kube-api-access-dcn9n\") pod \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") "
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.846747 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-container-storage-root\") pod \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") "
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.846813 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-node-pullsecrets\") pod \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") "
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.847013 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "aa1a8fc5-b5e1-4271-9632-861b6d0c601d" (UID: "aa1a8fc5-b5e1-4271-9632-861b6d0c601d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.848817 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-buildcachedir\") pod \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") "
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.849139 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-system-configs\") pod \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") "
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.849180 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-proxy-ca-bundles\") pod \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") "
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.849250 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-container-storage-run\") pod \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") "
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.849292 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-blob-cache\") pod \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") "
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.849322 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-ca-bundles\") pod \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") "
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.849352 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-buildworkdir\") pod \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") "
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.849390 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-builder-dockercfg-qfmzh-push\") pod \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") "
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.849429 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-builder-dockercfg-qfmzh-pull\") pod \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\" (UID: \"aa1a8fc5-b5e1-4271-9632-861b6d0c601d\") "
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.849839 4698 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.849927 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "aa1a8fc5-b5e1-4271-9632-861b6d0c601d" (UID: "aa1a8fc5-b5e1-4271-9632-861b6d0c601d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.850096 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "aa1a8fc5-b5e1-4271-9632-861b6d0c601d" (UID: "aa1a8fc5-b5e1-4271-9632-861b6d0c601d"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.850266 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "aa1a8fc5-b5e1-4271-9632-861b6d0c601d" (UID: "aa1a8fc5-b5e1-4271-9632-861b6d0c601d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.851906 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "aa1a8fc5-b5e1-4271-9632-861b6d0c601d" (UID: "aa1a8fc5-b5e1-4271-9632-861b6d0c601d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.852087 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "aa1a8fc5-b5e1-4271-9632-861b6d0c601d" (UID: "aa1a8fc5-b5e1-4271-9632-861b6d0c601d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.854730 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "aa1a8fc5-b5e1-4271-9632-861b6d0c601d" (UID: "aa1a8fc5-b5e1-4271-9632-861b6d0c601d"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.857933 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-builder-dockercfg-qfmzh-push" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-push") pod "aa1a8fc5-b5e1-4271-9632-861b6d0c601d" (UID: "aa1a8fc5-b5e1-4271-9632-861b6d0c601d"). InnerVolumeSpecName "builder-dockercfg-qfmzh-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.858071 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-builder-dockercfg-qfmzh-pull" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-pull") pod "aa1a8fc5-b5e1-4271-9632-861b6d0c601d" (UID: "aa1a8fc5-b5e1-4271-9632-861b6d0c601d"). InnerVolumeSpecName "builder-dockercfg-qfmzh-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.858256 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-kube-api-access-dcn9n" (OuterVolumeSpecName: "kube-api-access-dcn9n") pod "aa1a8fc5-b5e1-4271-9632-861b6d0c601d" (UID: "aa1a8fc5-b5e1-4271-9632-861b6d0c601d"). InnerVolumeSpecName "kube-api-access-dcn9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.950908 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "aa1a8fc5-b5e1-4271-9632-861b6d0c601d" (UID: "aa1a8fc5-b5e1-4271-9632-861b6d0c601d"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.964454 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcn9n\" (UniqueName: \"kubernetes.io/projected/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-kube-api-access-dcn9n\") on node \"crc\" DevicePath \"\""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.964556 4698 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.964584 4698 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.964601 4698 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.964633 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.964648 4698 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.964663 4698 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.964678 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-builder-dockercfg-qfmzh-push\") on node \"crc\" DevicePath \"\""
Feb 16 00:23:41 crc kubenswrapper[4698]: I0216 00:23:41.964689 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-builder-dockercfg-qfmzh-pull\") on node \"crc\" DevicePath \"\""
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.003459 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "aa1a8fc5-b5e1-4271-9632-861b6d0c601d" (UID: "aa1a8fc5-b5e1-4271-9632-861b6d0c601d"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.066090 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.066131 4698 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aa1a8fc5-b5e1-4271-9632-861b6d0c601d-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.336542 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_aa1a8fc5-b5e1-4271-9632-861b6d0c601d/docker-build/0.log"
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.336851 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa1a8fc5-b5e1-4271-9632-861b6d0c601d" containerID="572a874b7ade9594b6e1e1759ade8d2edd94d167438991a1de4fccc56e8a60de" exitCode=1
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.336885 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"aa1a8fc5-b5e1-4271-9632-861b6d0c601d","Type":"ContainerDied","Data":"572a874b7ade9594b6e1e1759ade8d2edd94d167438991a1de4fccc56e8a60de"}
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.336912 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"aa1a8fc5-b5e1-4271-9632-861b6d0c601d","Type":"ContainerDied","Data":"35f8edc037d213fa9c3f80bcec56f5b14691b5438776a7e266925892dd8ab521"}
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.336933 4698 scope.go:117] "RemoveContainer" containerID="572a874b7ade9594b6e1e1759ade8d2edd94d167438991a1de4fccc56e8a60de"
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.337072 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.389359 4698 scope.go:117] "RemoveContainer" containerID="5a1788dc213e3176220e25167aafd5fee7d1d572d30ac1eb3c76c36be88f2063"
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.397234 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"]
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.405040 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"]
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.419775 4698 scope.go:117] "RemoveContainer" containerID="572a874b7ade9594b6e1e1759ade8d2edd94d167438991a1de4fccc56e8a60de"
Feb 16 00:23:42 crc kubenswrapper[4698]: E0216 00:23:42.420228 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"572a874b7ade9594b6e1e1759ade8d2edd94d167438991a1de4fccc56e8a60de\": container with ID starting with 572a874b7ade9594b6e1e1759ade8d2edd94d167438991a1de4fccc56e8a60de not found: ID does not exist" containerID="572a874b7ade9594b6e1e1759ade8d2edd94d167438991a1de4fccc56e8a60de"
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.420267 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"572a874b7ade9594b6e1e1759ade8d2edd94d167438991a1de4fccc56e8a60de"} err="failed to get container status \"572a874b7ade9594b6e1e1759ade8d2edd94d167438991a1de4fccc56e8a60de\": rpc error: code = NotFound desc = could not find container \"572a874b7ade9594b6e1e1759ade8d2edd94d167438991a1de4fccc56e8a60de\": container with ID starting with 572a874b7ade9594b6e1e1759ade8d2edd94d167438991a1de4fccc56e8a60de not found: ID does not exist"
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.420301 4698 scope.go:117] "RemoveContainer" containerID="5a1788dc213e3176220e25167aafd5fee7d1d572d30ac1eb3c76c36be88f2063"
Feb 16 00:23:42 crc kubenswrapper[4698]: E0216 00:23:42.420583 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a1788dc213e3176220e25167aafd5fee7d1d572d30ac1eb3c76c36be88f2063\": container with ID starting with 5a1788dc213e3176220e25167aafd5fee7d1d572d30ac1eb3c76c36be88f2063 not found: ID does not exist" containerID="5a1788dc213e3176220e25167aafd5fee7d1d572d30ac1eb3c76c36be88f2063"
Feb 16 00:23:42 crc kubenswrapper[4698]: I0216 00:23:42.420631 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1788dc213e3176220e25167aafd5fee7d1d572d30ac1eb3c76c36be88f2063"} err="failed to get container status \"5a1788dc213e3176220e25167aafd5fee7d1d572d30ac1eb3c76c36be88f2063\": rpc error: code = NotFound desc = could not find container \"5a1788dc213e3176220e25167aafd5fee7d1d572d30ac1eb3c76c36be88f2063\": container with ID starting with 5a1788dc213e3176220e25167aafd5fee7d1d572d30ac1eb3c76c36be88f2063 not found: ID does not exist"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.072227 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"]
Feb 16 00:23:43 crc kubenswrapper[4698]: E0216 00:23:43.072728 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa1a8fc5-b5e1-4271-9632-861b6d0c601d" containerName="manage-dockerfile"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.072781 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa1a8fc5-b5e1-4271-9632-861b6d0c601d" containerName="manage-dockerfile"
Feb 16 00:23:43 crc kubenswrapper[4698]: E0216 00:23:43.072822 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa1a8fc5-b5e1-4271-9632-861b6d0c601d" containerName="docker-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.072836 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa1a8fc5-b5e1-4271-9632-861b6d0c601d" containerName="docker-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.073195 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa1a8fc5-b5e1-4271-9632-861b6d0c601d" containerName="docker-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.074953 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.078568 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.078567 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.084574 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.084590 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-qfmzh"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.113328 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"]
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.186326 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.186829 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-buildworkdir\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.187008 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.187187 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6151be4a-761e-47a9-96d3-4dab281e7670-buildcachedir\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.187364 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-container-storage-run\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.187554 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-system-configs\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.187956 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6151be4a-761e-47a9-96d3-4dab281e7670-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.188144 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/6151be4a-761e-47a9-96d3-4dab281e7670-builder-dockercfg-qfmzh-pull\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.188302 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.188490 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqjps\" (UniqueName: \"kubernetes.io/projected/6151be4a-761e-47a9-96d3-4dab281e7670-kube-api-access-hqjps\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.188738 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/6151be4a-761e-47a9-96d3-4dab281e7670-builder-dockercfg-qfmzh-push\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.188934 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-container-storage-root\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.244018 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa1a8fc5-b5e1-4271-9632-861b6d0c601d" path="/var/lib/kubelet/pods/aa1a8fc5-b5e1-4271-9632-861b6d0c601d/volumes"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.290568 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6151be4a-761e-47a9-96d3-4dab281e7670-buildcachedir\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.290671 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-container-storage-run\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.290710 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6151be4a-761e-47a9-96d3-4dab281e7670-buildcachedir\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.290738 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-system-configs\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.290902 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6151be4a-761e-47a9-96d3-4dab281e7670-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.290971 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/6151be4a-761e-47a9-96d3-4dab281e7670-builder-dockercfg-qfmzh-pull\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.291014 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.291128 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjps\" (UniqueName: \"kubernetes.io/projected/6151be4a-761e-47a9-96d3-4dab281e7670-kube-api-access-hqjps\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.291280 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/6151be4a-761e-47a9-96d3-4dab281e7670-builder-dockercfg-qfmzh-push\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.291361 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-container-storage-root\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.291436 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.291477 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-buildworkdir\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.291515 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.291553 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-system-configs\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build"
Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.291900 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-container-storage-run\") pod \"sg-core-2-build\" (UID:
\"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build" Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.292213 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build" Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.292309 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6151be4a-761e-47a9-96d3-4dab281e7670-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build" Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.293328 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-container-storage-root\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build" Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.293500 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build" Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.293829 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-buildworkdir\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build" Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.294465 
4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build" Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.299458 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/6151be4a-761e-47a9-96d3-4dab281e7670-builder-dockercfg-qfmzh-push\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build" Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.303376 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/6151be4a-761e-47a9-96d3-4dab281e7670-builder-dockercfg-qfmzh-pull\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build" Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.329867 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqjps\" (UniqueName: \"kubernetes.io/projected/6151be4a-761e-47a9-96d3-4dab281e7670-kube-api-access-hqjps\") pod \"sg-core-2-build\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " pod="service-telemetry/sg-core-2-build" Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.405326 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 16 00:23:43 crc kubenswrapper[4698]: I0216 00:23:43.666979 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 16 00:23:44 crc kubenswrapper[4698]: I0216 00:23:44.357243 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6151be4a-761e-47a9-96d3-4dab281e7670","Type":"ContainerStarted","Data":"5033cb72335596db47242928e7c3a6845ae0f1425524f4b49c3291658112c606"} Feb 16 00:23:44 crc kubenswrapper[4698]: I0216 00:23:44.357642 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6151be4a-761e-47a9-96d3-4dab281e7670","Type":"ContainerStarted","Data":"fae79e2276aa15c4d13c00a7ae689faf9df9f76611035872a126f1437cbd6c02"} Feb 16 00:23:45 crc kubenswrapper[4698]: I0216 00:23:45.366969 4698 generic.go:334] "Generic (PLEG): container finished" podID="6151be4a-761e-47a9-96d3-4dab281e7670" containerID="5033cb72335596db47242928e7c3a6845ae0f1425524f4b49c3291658112c606" exitCode=0 Feb 16 00:23:45 crc kubenswrapper[4698]: I0216 00:23:45.367021 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6151be4a-761e-47a9-96d3-4dab281e7670","Type":"ContainerDied","Data":"5033cb72335596db47242928e7c3a6845ae0f1425524f4b49c3291658112c606"} Feb 16 00:23:46 crc kubenswrapper[4698]: I0216 00:23:46.377354 4698 generic.go:334] "Generic (PLEG): container finished" podID="6151be4a-761e-47a9-96d3-4dab281e7670" containerID="b3aae740df5df5c69fb8a9cfa3fccb16dec56f284e627134cc60bb42004429cd" exitCode=0 Feb 16 00:23:46 crc kubenswrapper[4698]: I0216 00:23:46.377458 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6151be4a-761e-47a9-96d3-4dab281e7670","Type":"ContainerDied","Data":"b3aae740df5df5c69fb8a9cfa3fccb16dec56f284e627134cc60bb42004429cd"} Feb 16 00:23:46 crc 
kubenswrapper[4698]: I0216 00:23:46.420506 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_6151be4a-761e-47a9-96d3-4dab281e7670/manage-dockerfile/0.log" Feb 16 00:23:47 crc kubenswrapper[4698]: I0216 00:23:47.389971 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6151be4a-761e-47a9-96d3-4dab281e7670","Type":"ContainerStarted","Data":"afb9fe79f9b4a94fbb06ed096a497f5fd950fef151281f7ced30200244dd6ef4"} Feb 16 00:23:47 crc kubenswrapper[4698]: I0216 00:23:47.440256 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=4.440229627 podStartE2EDuration="4.440229627s" podCreationTimestamp="2026-02-16 00:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:23:47.434322423 +0000 UTC m=+1037.092221225" watchObservedRunningTime="2026-02-16 00:23:47.440229627 +0000 UTC m=+1037.098128429" Feb 16 00:24:27 crc kubenswrapper[4698]: I0216 00:24:27.045556 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:24:27 crc kubenswrapper[4698]: I0216 00:24:27.046170 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:24:57 crc kubenswrapper[4698]: I0216 00:24:57.046362 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:24:57 crc kubenswrapper[4698]: I0216 00:24:57.047076 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:25:27 crc kubenswrapper[4698]: I0216 00:25:27.046306 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:25:27 crc kubenswrapper[4698]: I0216 00:25:27.046997 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:25:27 crc kubenswrapper[4698]: I0216 00:25:27.047061 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:25:27 crc kubenswrapper[4698]: I0216 00:25:27.047890 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"39d1b91146f7648e212a74b13f222a511f40648ab7fc41107476755783be4ff2"} pod="openshift-machine-config-operator/machine-config-daemon-z56m2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 00:25:27 crc 
kubenswrapper[4698]: I0216 00:25:27.047969 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" containerID="cri-o://39d1b91146f7648e212a74b13f222a511f40648ab7fc41107476755783be4ff2" gracePeriod=600 Feb 16 00:25:28 crc kubenswrapper[4698]: I0216 00:25:28.139400 4698 generic.go:334] "Generic (PLEG): container finished" podID="7b351654-277f-4d0d-84f9-b003f934936c" containerID="39d1b91146f7648e212a74b13f222a511f40648ab7fc41107476755783be4ff2" exitCode=0 Feb 16 00:25:28 crc kubenswrapper[4698]: I0216 00:25:28.139476 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerDied","Data":"39d1b91146f7648e212a74b13f222a511f40648ab7fc41107476755783be4ff2"} Feb 16 00:25:28 crc kubenswrapper[4698]: I0216 00:25:28.140861 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerStarted","Data":"8920ac12f28a10d0002a39e030cf4a986f53bbff6cab4c65dd53e3307065853f"} Feb 16 00:25:28 crc kubenswrapper[4698]: I0216 00:25:28.140942 4698 scope.go:117] "RemoveContainer" containerID="d6541b3cb76f710dedaef0b85b0e104e861ef72466cd38ea058959a35248ef97" Feb 16 00:27:23 crc kubenswrapper[4698]: I0216 00:27:23.965832 4698 generic.go:334] "Generic (PLEG): container finished" podID="6151be4a-761e-47a9-96d3-4dab281e7670" containerID="afb9fe79f9b4a94fbb06ed096a497f5fd950fef151281f7ced30200244dd6ef4" exitCode=0 Feb 16 00:27:23 crc kubenswrapper[4698]: I0216 00:27:23.966101 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" 
event={"ID":"6151be4a-761e-47a9-96d3-4dab281e7670","Type":"ContainerDied","Data":"afb9fe79f9b4a94fbb06ed096a497f5fd950fef151281f7ced30200244dd6ef4"} Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.262682 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.364429 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-system-configs\") pod \"6151be4a-761e-47a9-96d3-4dab281e7670\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.364503 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-ca-bundles\") pod \"6151be4a-761e-47a9-96d3-4dab281e7670\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.364544 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6151be4a-761e-47a9-96d3-4dab281e7670-buildcachedir\") pod \"6151be4a-761e-47a9-96d3-4dab281e7670\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.364567 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-container-storage-root\") pod \"6151be4a-761e-47a9-96d3-4dab281e7670\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.364608 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-buildworkdir\") pod \"6151be4a-761e-47a9-96d3-4dab281e7670\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.364690 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6151be4a-761e-47a9-96d3-4dab281e7670-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6151be4a-761e-47a9-96d3-4dab281e7670" (UID: "6151be4a-761e-47a9-96d3-4dab281e7670"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.364716 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-container-storage-run\") pod \"6151be4a-761e-47a9-96d3-4dab281e7670\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.364853 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-proxy-ca-bundles\") pod \"6151be4a-761e-47a9-96d3-4dab281e7670\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.365020 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6151be4a-761e-47a9-96d3-4dab281e7670-node-pullsecrets\") pod \"6151be4a-761e-47a9-96d3-4dab281e7670\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.365047 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqjps\" (UniqueName: \"kubernetes.io/projected/6151be4a-761e-47a9-96d3-4dab281e7670-kube-api-access-hqjps\") pod 
\"6151be4a-761e-47a9-96d3-4dab281e7670\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.365173 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6151be4a-761e-47a9-96d3-4dab281e7670-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6151be4a-761e-47a9-96d3-4dab281e7670" (UID: "6151be4a-761e-47a9-96d3-4dab281e7670"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.365749 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6151be4a-761e-47a9-96d3-4dab281e7670" (UID: "6151be4a-761e-47a9-96d3-4dab281e7670"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.365768 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6151be4a-761e-47a9-96d3-4dab281e7670" (UID: "6151be4a-761e-47a9-96d3-4dab281e7670"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.365799 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/6151be4a-761e-47a9-96d3-4dab281e7670-builder-dockercfg-qfmzh-pull\") pod \"6151be4a-761e-47a9-96d3-4dab281e7670\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.365881 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/6151be4a-761e-47a9-96d3-4dab281e7670-builder-dockercfg-qfmzh-push\") pod \"6151be4a-761e-47a9-96d3-4dab281e7670\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.365944 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-build-blob-cache\") pod \"6151be4a-761e-47a9-96d3-4dab281e7670\" (UID: \"6151be4a-761e-47a9-96d3-4dab281e7670\") " Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.365961 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6151be4a-761e-47a9-96d3-4dab281e7670" (UID: "6151be4a-761e-47a9-96d3-4dab281e7670"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.366252 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6151be4a-761e-47a9-96d3-4dab281e7670" (UID: "6151be4a-761e-47a9-96d3-4dab281e7670"). 
InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.366937 4698 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.366954 4698 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.366962 4698 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6151be4a-761e-47a9-96d3-4dab281e7670-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.366971 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.366979 4698 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6151be4a-761e-47a9-96d3-4dab281e7670-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.366987 4698 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6151be4a-761e-47a9-96d3-4dab281e7670-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.371463 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6151be4a-761e-47a9-96d3-4dab281e7670-builder-dockercfg-qfmzh-pull" 
(OuterVolumeSpecName: "builder-dockercfg-qfmzh-pull") pod "6151be4a-761e-47a9-96d3-4dab281e7670" (UID: "6151be4a-761e-47a9-96d3-4dab281e7670"). InnerVolumeSpecName "builder-dockercfg-qfmzh-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.371762 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6151be4a-761e-47a9-96d3-4dab281e7670-kube-api-access-hqjps" (OuterVolumeSpecName: "kube-api-access-hqjps") pod "6151be4a-761e-47a9-96d3-4dab281e7670" (UID: "6151be4a-761e-47a9-96d3-4dab281e7670"). InnerVolumeSpecName "kube-api-access-hqjps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.377754 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6151be4a-761e-47a9-96d3-4dab281e7670-builder-dockercfg-qfmzh-push" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-push") pod "6151be4a-761e-47a9-96d3-4dab281e7670" (UID: "6151be4a-761e-47a9-96d3-4dab281e7670"). InnerVolumeSpecName "builder-dockercfg-qfmzh-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.383688 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6151be4a-761e-47a9-96d3-4dab281e7670" (UID: "6151be4a-761e-47a9-96d3-4dab281e7670"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.468192 4698 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.468227 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqjps\" (UniqueName: \"kubernetes.io/projected/6151be4a-761e-47a9-96d3-4dab281e7670-kube-api-access-hqjps\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.468239 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/6151be4a-761e-47a9-96d3-4dab281e7670-builder-dockercfg-qfmzh-pull\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.474679 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/6151be4a-761e-47a9-96d3-4dab281e7670-builder-dockercfg-qfmzh-push\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.763715 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6151be4a-761e-47a9-96d3-4dab281e7670" (UID: "6151be4a-761e-47a9-96d3-4dab281e7670"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.777769 4698 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.991363 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6151be4a-761e-47a9-96d3-4dab281e7670","Type":"ContainerDied","Data":"fae79e2276aa15c4d13c00a7ae689faf9df9f76611035872a126f1437cbd6c02"} Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.991425 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fae79e2276aa15c4d13c00a7ae689faf9df9f76611035872a126f1437cbd6c02" Feb 16 00:27:25 crc kubenswrapper[4698]: I0216 00:27:25.991445 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 16 00:27:27 crc kubenswrapper[4698]: I0216 00:27:27.045867 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:27:27 crc kubenswrapper[4698]: I0216 00:27:27.045941 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:27:28 crc kubenswrapper[4698]: I0216 00:27:28.100804 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6151be4a-761e-47a9-96d3-4dab281e7670" (UID: "6151be4a-761e-47a9-96d3-4dab281e7670"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:27:28 crc kubenswrapper[4698]: I0216 00:27:28.113481 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6151be4a-761e-47a9-96d3-4dab281e7670-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.698075 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 16 00:27:29 crc kubenswrapper[4698]: E0216 00:27:29.698462 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6151be4a-761e-47a9-96d3-4dab281e7670" containerName="docker-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.698485 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6151be4a-761e-47a9-96d3-4dab281e7670" containerName="docker-build" Feb 16 00:27:29 crc kubenswrapper[4698]: E0216 00:27:29.698503 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6151be4a-761e-47a9-96d3-4dab281e7670" containerName="git-clone" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.698516 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6151be4a-761e-47a9-96d3-4dab281e7670" containerName="git-clone" Feb 16 00:27:29 crc kubenswrapper[4698]: E0216 00:27:29.698553 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6151be4a-761e-47a9-96d3-4dab281e7670" containerName="manage-dockerfile" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.698565 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6151be4a-761e-47a9-96d3-4dab281e7670" containerName="manage-dockerfile" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 
00:27:29.698769 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="6151be4a-761e-47a9-96d3-4dab281e7670" containerName="docker-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.699792 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.703497 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-qfmzh" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.703604 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.703798 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.707088 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.720928 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.736457 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.736529 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " 
pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.736559 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.736590 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-builder-dockercfg-qfmzh-push\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.736627 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.736829 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.736896 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-buildworkdir\") pod \"sg-bridge-1-build\" (UID: 
\"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.736952 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.737290 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-builder-dockercfg-qfmzh-pull\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.737531 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4mql\" (UniqueName: \"kubernetes.io/projected/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-kube-api-access-r4mql\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.737570 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.737605 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.838549 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-builder-dockercfg-qfmzh-pull\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.838645 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4mql\" (UniqueName: \"kubernetes.io/projected/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-kube-api-access-r4mql\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.838680 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.838716 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.838753 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.838786 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.838830 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.838858 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.838889 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-builder-dockercfg-qfmzh-push\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.838931 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.838960 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.838995 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.839152 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.839288 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.839332 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: 
\"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.839523 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.839929 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.839979 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.840004 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.840056 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc 
kubenswrapper[4698]: I0216 00:27:29.840478 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.843901 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-builder-dockercfg-qfmzh-push\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.843994 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-builder-dockercfg-qfmzh-pull\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:29 crc kubenswrapper[4698]: I0216 00:27:29.870699 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4mql\" (UniqueName: \"kubernetes.io/projected/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-kube-api-access-r4mql\") pod \"sg-bridge-1-build\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:30 crc kubenswrapper[4698]: I0216 00:27:30.021186 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:30 crc kubenswrapper[4698]: I0216 00:27:30.282847 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 16 00:27:30 crc kubenswrapper[4698]: E0216 00:27:30.747156 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27aaa51c_12df_4d2e_89c5_f0c4d6a691f3.slice/crio-370ec32c037198e76fc305348018ace6d03f67e90f36202e7dc34ac204a4f151.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27aaa51c_12df_4d2e_89c5_f0c4d6a691f3.slice/crio-conmon-370ec32c037198e76fc305348018ace6d03f67e90f36202e7dc34ac204a4f151.scope\": RecentStats: unable to find data in memory cache]" Feb 16 00:27:31 crc kubenswrapper[4698]: I0216 00:27:31.026805 4698 generic.go:334] "Generic (PLEG): container finished" podID="27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" containerID="370ec32c037198e76fc305348018ace6d03f67e90f36202e7dc34ac204a4f151" exitCode=0 Feb 16 00:27:31 crc kubenswrapper[4698]: I0216 00:27:31.026841 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3","Type":"ContainerDied","Data":"370ec32c037198e76fc305348018ace6d03f67e90f36202e7dc34ac204a4f151"} Feb 16 00:27:31 crc kubenswrapper[4698]: I0216 00:27:31.026863 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3","Type":"ContainerStarted","Data":"dd1dc602e915333895efac3ee8e66fb754f4fe6210de330fa845e376663cfae7"} Feb 16 00:27:32 crc kubenswrapper[4698]: I0216 00:27:32.034712 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" 
event={"ID":"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3","Type":"ContainerStarted","Data":"56be1e33b378063f3e5a144f605dbad8ae243c499d5eb5418645650a45b6dc2d"} Feb 16 00:27:32 crc kubenswrapper[4698]: I0216 00:27:32.057961 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.057939952 podStartE2EDuration="3.057939952s" podCreationTimestamp="2026-02-16 00:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:27:32.052606075 +0000 UTC m=+1261.710504837" watchObservedRunningTime="2026-02-16 00:27:32.057939952 +0000 UTC m=+1261.715838714" Feb 16 00:27:39 crc kubenswrapper[4698]: I0216 00:27:39.091005 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_27aaa51c-12df-4d2e-89c5-f0c4d6a691f3/docker-build/0.log" Feb 16 00:27:39 crc kubenswrapper[4698]: I0216 00:27:39.091859 4698 generic.go:334] "Generic (PLEG): container finished" podID="27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" containerID="56be1e33b378063f3e5a144f605dbad8ae243c499d5eb5418645650a45b6dc2d" exitCode=1 Feb 16 00:27:39 crc kubenswrapper[4698]: I0216 00:27:39.091893 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3","Type":"ContainerDied","Data":"56be1e33b378063f3e5a144f605dbad8ae243c499d5eb5418645650a45b6dc2d"} Feb 16 00:27:39 crc kubenswrapper[4698]: I0216 00:27:39.988281 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.403126 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_27aaa51c-12df-4d2e-89c5-f0c4d6a691f3/docker-build/0.log" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.403995 4698 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.587577 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-ca-bundles\") pod \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.587684 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-builder-dockercfg-qfmzh-pull\") pod \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.587705 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-builder-dockercfg-qfmzh-push\") pod \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.587750 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-system-configs\") pod \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.587810 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-container-storage-run\") pod \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.587829 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-container-storage-root\") pod \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.587845 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-buildcachedir\") pod \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.587861 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-buildworkdir\") pod \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.587891 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-node-pullsecrets\") pod \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.587906 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-blob-cache\") pod \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.587932 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4mql\" (UniqueName: \"kubernetes.io/projected/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-kube-api-access-r4mql\") pod 
\"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.587968 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-proxy-ca-bundles\") pod \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\" (UID: \"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3\") " Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.588686 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" (UID: "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.588762 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" (UID: "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.588798 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" (UID: "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.589302 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" (UID: "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.589337 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" (UID: "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.594134 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" (UID: "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.594635 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" (UID: "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.612004 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-builder-dockercfg-qfmzh-push" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-push") pod "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" (UID: "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3"). InnerVolumeSpecName "builder-dockercfg-qfmzh-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.612369 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-kube-api-access-r4mql" (OuterVolumeSpecName: "kube-api-access-r4mql") pod "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" (UID: "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3"). InnerVolumeSpecName "kube-api-access-r4mql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.621143 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-builder-dockercfg-qfmzh-pull" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-pull") pod "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" (UID: "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3"). InnerVolumeSpecName "builder-dockercfg-qfmzh-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.689445 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-builder-dockercfg-qfmzh-pull\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.689480 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-builder-dockercfg-qfmzh-push\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.689491 4698 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.689499 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.689509 4698 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.689517 4698 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.689526 4698 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-node-pullsecrets\") on node \"crc\" DevicePath 
\"\"" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.689535 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4mql\" (UniqueName: \"kubernetes.io/projected/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-kube-api-access-r4mql\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.689544 4698 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.689551 4698 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.704094 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" (UID: "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.790358 4698 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:40 crc kubenswrapper[4698]: I0216 00:27:40.995213 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" (UID: "27aaa51c-12df-4d2e-89c5-f0c4d6a691f3"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.094036 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.105101 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_27aaa51c-12df-4d2e-89c5-f0c4d6a691f3/docker-build/0.log" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.105429 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"27aaa51c-12df-4d2e-89c5-f0c4d6a691f3","Type":"ContainerDied","Data":"dd1dc602e915333895efac3ee8e66fb754f4fe6210de330fa845e376663cfae7"} Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.105467 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd1dc602e915333895efac3ee8e66fb754f4fe6210de330fa845e376663cfae7" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.105534 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.138551 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.146142 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.247679 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" path="/var/lib/kubelet/pods/27aaa51c-12df-4d2e-89c5-f0c4d6a691f3/volumes" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.615261 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Feb 16 00:27:41 crc kubenswrapper[4698]: E0216 00:27:41.615563 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" containerName="manage-dockerfile" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.615581 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" containerName="manage-dockerfile" Feb 16 00:27:41 crc kubenswrapper[4698]: E0216 00:27:41.615603 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" containerName="docker-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.615632 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" containerName="docker-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.615786 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="27aaa51c-12df-4d2e-89c5-f0c4d6a691f3" containerName="docker-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.616777 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.618354 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.618855 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.619031 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.619403 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-qfmzh" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.638988 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.702033 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.702095 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmq2w\" (UniqueName: \"kubernetes.io/projected/9f3508c4-cb6b-4a80-80dc-ecac98059016-kube-api-access-vmq2w\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.702122 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.702289 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.702385 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9f3508c4-cb6b-4a80-80dc-ecac98059016-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.702503 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.702552 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/9f3508c4-cb6b-4a80-80dc-ecac98059016-builder-dockercfg-qfmzh-push\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.702630 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.702883 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.702940 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9f3508c4-cb6b-4a80-80dc-ecac98059016-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.703029 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/9f3508c4-cb6b-4a80-80dc-ecac98059016-builder-dockercfg-qfmzh-pull\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.703051 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.804743 4698 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.804807 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9f3508c4-cb6b-4a80-80dc-ecac98059016-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.804850 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/9f3508c4-cb6b-4a80-80dc-ecac98059016-builder-dockercfg-qfmzh-pull\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.804872 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.804899 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.804919 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmq2w\" (UniqueName: 
\"kubernetes.io/projected/9f3508c4-cb6b-4a80-80dc-ecac98059016-kube-api-access-vmq2w\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.804937 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.804957 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.804981 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9f3508c4-cb6b-4a80-80dc-ecac98059016-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.805014 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.805041 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: 
\"kubernetes.io/secret/9f3508c4-cb6b-4a80-80dc-ecac98059016-builder-dockercfg-qfmzh-push\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.805066 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.805205 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.805485 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.805526 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9f3508c4-cb6b-4a80-80dc-ecac98059016-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.805890 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-container-storage-root\") pod 
\"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.806399 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9f3508c4-cb6b-4a80-80dc-ecac98059016-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.806574 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.806646 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.807123 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.807444 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc 
kubenswrapper[4698]: I0216 00:27:41.810648 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/9f3508c4-cb6b-4a80-80dc-ecac98059016-builder-dockercfg-qfmzh-push\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.811571 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/9f3508c4-cb6b-4a80-80dc-ecac98059016-builder-dockercfg-qfmzh-pull\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.837818 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmq2w\" (UniqueName: \"kubernetes.io/projected/9f3508c4-cb6b-4a80-80dc-ecac98059016-kube-api-access-vmq2w\") pod \"sg-bridge-2-build\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:41 crc kubenswrapper[4698]: I0216 00:27:41.935557 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 16 00:27:42 crc kubenswrapper[4698]: I0216 00:27:42.394534 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Feb 16 00:27:43 crc kubenswrapper[4698]: I0216 00:27:43.122193 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"9f3508c4-cb6b-4a80-80dc-ecac98059016","Type":"ContainerStarted","Data":"ed733349e2835587adf0bb4ff64927618dfc8441136f165308ba9d2bbb830ca4"} Feb 16 00:27:43 crc kubenswrapper[4698]: I0216 00:27:43.122683 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"9f3508c4-cb6b-4a80-80dc-ecac98059016","Type":"ContainerStarted","Data":"2bfd3e60828e87866ecec791f1d473dcbbfc1f148c689aab412f833ad9fb9b71"} Feb 16 00:27:44 crc kubenswrapper[4698]: I0216 00:27:44.129476 4698 generic.go:334] "Generic (PLEG): container finished" podID="9f3508c4-cb6b-4a80-80dc-ecac98059016" containerID="ed733349e2835587adf0bb4ff64927618dfc8441136f165308ba9d2bbb830ca4" exitCode=0 Feb 16 00:27:44 crc kubenswrapper[4698]: I0216 00:27:44.129545 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"9f3508c4-cb6b-4a80-80dc-ecac98059016","Type":"ContainerDied","Data":"ed733349e2835587adf0bb4ff64927618dfc8441136f165308ba9d2bbb830ca4"} Feb 16 00:27:45 crc kubenswrapper[4698]: I0216 00:27:45.139749 4698 generic.go:334] "Generic (PLEG): container finished" podID="9f3508c4-cb6b-4a80-80dc-ecac98059016" containerID="de4f8e183b2b9a71f38b9c61f62a13aed2a208259a50f3266a8e63ad50d1ac62" exitCode=0 Feb 16 00:27:45 crc kubenswrapper[4698]: I0216 00:27:45.139835 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"9f3508c4-cb6b-4a80-80dc-ecac98059016","Type":"ContainerDied","Data":"de4f8e183b2b9a71f38b9c61f62a13aed2a208259a50f3266a8e63ad50d1ac62"} Feb 16 00:27:45 
crc kubenswrapper[4698]: I0216 00:27:45.189993 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_9f3508c4-cb6b-4a80-80dc-ecac98059016/manage-dockerfile/0.log" Feb 16 00:27:46 crc kubenswrapper[4698]: I0216 00:27:46.152873 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"9f3508c4-cb6b-4a80-80dc-ecac98059016","Type":"ContainerStarted","Data":"0f59e6eefe8b6c3b62809865d1b1ae2996eb67e7ff720f2d6c60cb920efa5aae"} Feb 16 00:27:46 crc kubenswrapper[4698]: I0216 00:27:46.187801 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.187774423 podStartE2EDuration="5.187774423s" podCreationTimestamp="2026-02-16 00:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:27:46.184366028 +0000 UTC m=+1275.842264840" watchObservedRunningTime="2026-02-16 00:27:46.187774423 +0000 UTC m=+1275.845673215" Feb 16 00:27:57 crc kubenswrapper[4698]: I0216 00:27:57.046048 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:27:57 crc kubenswrapper[4698]: I0216 00:27:57.046748 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:28:27 crc kubenswrapper[4698]: I0216 00:28:27.045869 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:28:27 crc kubenswrapper[4698]: I0216 00:28:27.047026 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:28:27 crc kubenswrapper[4698]: I0216 00:28:27.047119 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:28:27 crc kubenswrapper[4698]: I0216 00:28:27.048389 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8920ac12f28a10d0002a39e030cf4a986f53bbff6cab4c65dd53e3307065853f"} pod="openshift-machine-config-operator/machine-config-daemon-z56m2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 00:28:27 crc kubenswrapper[4698]: I0216 00:28:27.048516 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" containerID="cri-o://8920ac12f28a10d0002a39e030cf4a986f53bbff6cab4c65dd53e3307065853f" gracePeriod=600 Feb 16 00:28:27 crc kubenswrapper[4698]: I0216 00:28:27.449650 4698 generic.go:334] "Generic (PLEG): container finished" podID="7b351654-277f-4d0d-84f9-b003f934936c" containerID="8920ac12f28a10d0002a39e030cf4a986f53bbff6cab4c65dd53e3307065853f" exitCode=0 Feb 16 00:28:27 crc kubenswrapper[4698]: I0216 00:28:27.449747 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerDied","Data":"8920ac12f28a10d0002a39e030cf4a986f53bbff6cab4c65dd53e3307065853f"} Feb 16 00:28:27 crc kubenswrapper[4698]: I0216 00:28:27.450079 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerStarted","Data":"6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"} Feb 16 00:28:27 crc kubenswrapper[4698]: I0216 00:28:27.450110 4698 scope.go:117] "RemoveContainer" containerID="39d1b91146f7648e212a74b13f222a511f40648ab7fc41107476755783be4ff2" Feb 16 00:28:34 crc kubenswrapper[4698]: I0216 00:28:34.512470 4698 generic.go:334] "Generic (PLEG): container finished" podID="9f3508c4-cb6b-4a80-80dc-ecac98059016" containerID="0f59e6eefe8b6c3b62809865d1b1ae2996eb67e7ff720f2d6c60cb920efa5aae" exitCode=0 Feb 16 00:28:34 crc kubenswrapper[4698]: I0216 00:28:34.512577 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"9f3508c4-cb6b-4a80-80dc-ecac98059016","Type":"ContainerDied","Data":"0f59e6eefe8b6c3b62809865d1b1ae2996eb67e7ff720f2d6c60cb920efa5aae"} Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.832705 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.971995 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-buildworkdir\") pod \"9f3508c4-cb6b-4a80-80dc-ecac98059016\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.972280 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-system-configs\") pod \"9f3508c4-cb6b-4a80-80dc-ecac98059016\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.972319 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-ca-bundles\") pod \"9f3508c4-cb6b-4a80-80dc-ecac98059016\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.972335 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-blob-cache\") pod \"9f3508c4-cb6b-4a80-80dc-ecac98059016\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.972356 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9f3508c4-cb6b-4a80-80dc-ecac98059016-node-pullsecrets\") pod \"9f3508c4-cb6b-4a80-80dc-ecac98059016\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.972375 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-container-storage-root\") pod \"9f3508c4-cb6b-4a80-80dc-ecac98059016\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.972411 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-container-storage-run\") pod \"9f3508c4-cb6b-4a80-80dc-ecac98059016\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.972444 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9f3508c4-cb6b-4a80-80dc-ecac98059016-buildcachedir\") pod \"9f3508c4-cb6b-4a80-80dc-ecac98059016\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.972464 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-proxy-ca-bundles\") pod \"9f3508c4-cb6b-4a80-80dc-ecac98059016\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.972480 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/9f3508c4-cb6b-4a80-80dc-ecac98059016-builder-dockercfg-qfmzh-push\") pod \"9f3508c4-cb6b-4a80-80dc-ecac98059016\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.972498 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/9f3508c4-cb6b-4a80-80dc-ecac98059016-builder-dockercfg-qfmzh-pull\") pod 
\"9f3508c4-cb6b-4a80-80dc-ecac98059016\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.972523 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmq2w\" (UniqueName: \"kubernetes.io/projected/9f3508c4-cb6b-4a80-80dc-ecac98059016-kube-api-access-vmq2w\") pod \"9f3508c4-cb6b-4a80-80dc-ecac98059016\" (UID: \"9f3508c4-cb6b-4a80-80dc-ecac98059016\") " Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.972732 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f3508c4-cb6b-4a80-80dc-ecac98059016-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9f3508c4-cb6b-4a80-80dc-ecac98059016" (UID: "9f3508c4-cb6b-4a80-80dc-ecac98059016"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.972767 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f3508c4-cb6b-4a80-80dc-ecac98059016-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "9f3508c4-cb6b-4a80-80dc-ecac98059016" (UID: "9f3508c4-cb6b-4a80-80dc-ecac98059016"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.973240 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "9f3508c4-cb6b-4a80-80dc-ecac98059016" (UID: "9f3508c4-cb6b-4a80-80dc-ecac98059016"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.973309 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "9f3508c4-cb6b-4a80-80dc-ecac98059016" (UID: "9f3508c4-cb6b-4a80-80dc-ecac98059016"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.974455 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "9f3508c4-cb6b-4a80-80dc-ecac98059016" (UID: "9f3508c4-cb6b-4a80-80dc-ecac98059016"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.975878 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "9f3508c4-cb6b-4a80-80dc-ecac98059016" (UID: "9f3508c4-cb6b-4a80-80dc-ecac98059016"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.976349 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "9f3508c4-cb6b-4a80-80dc-ecac98059016" (UID: "9f3508c4-cb6b-4a80-80dc-ecac98059016"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.977751 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3508c4-cb6b-4a80-80dc-ecac98059016-builder-dockercfg-qfmzh-pull" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-pull") pod "9f3508c4-cb6b-4a80-80dc-ecac98059016" (UID: "9f3508c4-cb6b-4a80-80dc-ecac98059016"). InnerVolumeSpecName "builder-dockercfg-qfmzh-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.992347 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f3508c4-cb6b-4a80-80dc-ecac98059016-kube-api-access-vmq2w" (OuterVolumeSpecName: "kube-api-access-vmq2w") pod "9f3508c4-cb6b-4a80-80dc-ecac98059016" (UID: "9f3508c4-cb6b-4a80-80dc-ecac98059016"). InnerVolumeSpecName "kube-api-access-vmq2w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:28:35 crc kubenswrapper[4698]: I0216 00:28:35.993302 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3508c4-cb6b-4a80-80dc-ecac98059016-builder-dockercfg-qfmzh-push" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-push") pod "9f3508c4-cb6b-4a80-80dc-ecac98059016" (UID: "9f3508c4-cb6b-4a80-80dc-ecac98059016"). InnerVolumeSpecName "builder-dockercfg-qfmzh-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.073800 4698 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.073834 4698 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.073844 4698 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.073853 4698 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9f3508c4-cb6b-4a80-80dc-ecac98059016-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.073861 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.073869 4698 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9f3508c4-cb6b-4a80-80dc-ecac98059016-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.073877 4698 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.073886 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/9f3508c4-cb6b-4a80-80dc-ecac98059016-builder-dockercfg-qfmzh-push\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.073895 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/9f3508c4-cb6b-4a80-80dc-ecac98059016-builder-dockercfg-qfmzh-pull\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.073903 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmq2w\" (UniqueName: \"kubernetes.io/projected/9f3508c4-cb6b-4a80-80dc-ecac98059016-kube-api-access-vmq2w\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.121808 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "9f3508c4-cb6b-4a80-80dc-ecac98059016" (UID: "9f3508c4-cb6b-4a80-80dc-ecac98059016"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.175406 4698 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.534441 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"9f3508c4-cb6b-4a80-80dc-ecac98059016","Type":"ContainerDied","Data":"2bfd3e60828e87866ecec791f1d473dcbbfc1f148c689aab412f833ad9fb9b71"}
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.534509 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bfd3e60828e87866ecec791f1d473dcbbfc1f148c689aab412f833ad9fb9b71"
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.534676 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.929883 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "9f3508c4-cb6b-4a80-80dc-ecac98059016" (UID: "9f3508c4-cb6b-4a80-80dc-ecac98059016"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:28:36 crc kubenswrapper[4698]: I0216 00:28:36.995979 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9f3508c4-cb6b-4a80-80dc-ecac98059016-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.336845 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Feb 16 00:28:40 crc kubenswrapper[4698]: E0216 00:28:40.337455 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3508c4-cb6b-4a80-80dc-ecac98059016" containerName="docker-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.337472 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3508c4-cb6b-4a80-80dc-ecac98059016" containerName="docker-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: E0216 00:28:40.337488 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3508c4-cb6b-4a80-80dc-ecac98059016" containerName="manage-dockerfile"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.337496 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3508c4-cb6b-4a80-80dc-ecac98059016" containerName="manage-dockerfile"
Feb 16 00:28:40 crc kubenswrapper[4698]: E0216 00:28:40.337510 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3508c4-cb6b-4a80-80dc-ecac98059016" containerName="git-clone"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.337518 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3508c4-cb6b-4a80-80dc-ecac98059016" containerName="git-clone"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.337661 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f3508c4-cb6b-4a80-80dc-ecac98059016" containerName="docker-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.338406 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.342078 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-qfmzh"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.342196 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.342898 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.345036 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.361551 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.442450 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4npxj\" (UniqueName: \"kubernetes.io/projected/e1201e63-447b-4600-b7e0-59ef7a05edb6-kube-api-access-4npxj\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.442539 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.442889 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.442930 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e1201e63-447b-4600-b7e0-59ef7a05edb6-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.443120 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.443226 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.443311 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.443441 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e1201e63-447b-4600-b7e0-59ef7a05edb6-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.443663 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.443728 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.443792 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/e1201e63-447b-4600-b7e0-59ef7a05edb6-builder-dockercfg-qfmzh-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.443899 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/e1201e63-447b-4600-b7e0-59ef7a05edb6-builder-dockercfg-qfmzh-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.544972 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.545069 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e1201e63-447b-4600-b7e0-59ef7a05edb6-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.545148 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.545183 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.545261 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/e1201e63-447b-4600-b7e0-59ef7a05edb6-builder-dockercfg-qfmzh-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.545255 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e1201e63-447b-4600-b7e0-59ef7a05edb6-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.546136 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.546348 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.546696 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.546908 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/e1201e63-447b-4600-b7e0-59ef7a05edb6-builder-dockercfg-qfmzh-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.547013 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4npxj\" (UniqueName: \"kubernetes.io/projected/e1201e63-447b-4600-b7e0-59ef7a05edb6-kube-api-access-4npxj\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.547092 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.547156 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.547210 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e1201e63-447b-4600-b7e0-59ef7a05edb6-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.547261 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.547316 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.547457 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e1201e63-447b-4600-b7e0-59ef7a05edb6-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.547834 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.547896 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.549049 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.549335 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.554927 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/e1201e63-447b-4600-b7e0-59ef7a05edb6-builder-dockercfg-qfmzh-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.555303 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/e1201e63-447b-4600-b7e0-59ef7a05edb6-builder-dockercfg-qfmzh-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.578027 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4npxj\" (UniqueName: \"kubernetes.io/projected/e1201e63-447b-4600-b7e0-59ef7a05edb6-kube-api-access-4npxj\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.656084 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:40 crc kubenswrapper[4698]: I0216 00:28:40.939375 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Feb 16 00:28:41 crc kubenswrapper[4698]: I0216 00:28:41.577516 4698 generic.go:334] "Generic (PLEG): container finished" podID="e1201e63-447b-4600-b7e0-59ef7a05edb6" containerID="45686cdd716729927aec319d74ec8638b08ab16469f1f1ef1b0d97f8bb91ecfb" exitCode=0
Feb 16 00:28:41 crc kubenswrapper[4698]: I0216 00:28:41.577604 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"e1201e63-447b-4600-b7e0-59ef7a05edb6","Type":"ContainerDied","Data":"45686cdd716729927aec319d74ec8638b08ab16469f1f1ef1b0d97f8bb91ecfb"}
Feb 16 00:28:41 crc kubenswrapper[4698]: I0216 00:28:41.577706 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"e1201e63-447b-4600-b7e0-59ef7a05edb6","Type":"ContainerStarted","Data":"f15eaa89145e7efac9e0a341d34ae1703349ca6e35bb3af72e213e3dbb41f246"}
Feb 16 00:28:42 crc kubenswrapper[4698]: I0216 00:28:42.591345 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"e1201e63-447b-4600-b7e0-59ef7a05edb6","Type":"ContainerStarted","Data":"1349b286b19491442efab56cbc68cc88f463d4029576402dcc0dbe7b9ad1588b"}
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.000673 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=11.000647636 podStartE2EDuration="11.000647636s" podCreationTimestamp="2026-02-16 00:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:28:42.628182244 +0000 UTC m=+1332.286081006" watchObservedRunningTime="2026-02-16 00:28:51.000647636 +0000 UTC m=+1340.658546438"
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.008642 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.009073 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="e1201e63-447b-4600-b7e0-59ef7a05edb6" containerName="docker-build" containerID="cri-o://1349b286b19491442efab56cbc68cc88f463d4029576402dcc0dbe7b9ad1588b" gracePeriod=30
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.448657 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_e1201e63-447b-4600-b7e0-59ef7a05edb6/docker-build/0.log"
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.449188 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.508799 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4npxj\" (UniqueName: \"kubernetes.io/projected/e1201e63-447b-4600-b7e0-59ef7a05edb6-kube-api-access-4npxj\") pod \"e1201e63-447b-4600-b7e0-59ef7a05edb6\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") "
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.508852 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-buildworkdir\") pod \"e1201e63-447b-4600-b7e0-59ef7a05edb6\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") "
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.508880 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-system-configs\") pod \"e1201e63-447b-4600-b7e0-59ef7a05edb6\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") "
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.508897 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-blob-cache\") pod \"e1201e63-447b-4600-b7e0-59ef7a05edb6\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") "
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.508921 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-container-storage-run\") pod \"e1201e63-447b-4600-b7e0-59ef7a05edb6\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") "
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.508934 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e1201e63-447b-4600-b7e0-59ef7a05edb6-buildcachedir\") pod \"e1201e63-447b-4600-b7e0-59ef7a05edb6\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") "
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.508953 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-ca-bundles\") pod \"e1201e63-447b-4600-b7e0-59ef7a05edb6\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") "
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.508969 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-container-storage-root\") pod \"e1201e63-447b-4600-b7e0-59ef7a05edb6\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") "
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.508988 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e1201e63-447b-4600-b7e0-59ef7a05edb6-node-pullsecrets\") pod \"e1201e63-447b-4600-b7e0-59ef7a05edb6\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") "
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.509003 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-proxy-ca-bundles\") pod \"e1201e63-447b-4600-b7e0-59ef7a05edb6\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") "
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.509018 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/e1201e63-447b-4600-b7e0-59ef7a05edb6-builder-dockercfg-qfmzh-pull\") pod \"e1201e63-447b-4600-b7e0-59ef7a05edb6\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") "
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.509034 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/e1201e63-447b-4600-b7e0-59ef7a05edb6-builder-dockercfg-qfmzh-push\") pod \"e1201e63-447b-4600-b7e0-59ef7a05edb6\" (UID: \"e1201e63-447b-4600-b7e0-59ef7a05edb6\") "
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.509714 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1201e63-447b-4600-b7e0-59ef7a05edb6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e1201e63-447b-4600-b7e0-59ef7a05edb6" (UID: "e1201e63-447b-4600-b7e0-59ef7a05edb6"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.509851 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e1201e63-447b-4600-b7e0-59ef7a05edb6" (UID: "e1201e63-447b-4600-b7e0-59ef7a05edb6"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.509963 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1201e63-447b-4600-b7e0-59ef7a05edb6-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e1201e63-447b-4600-b7e0-59ef7a05edb6" (UID: "e1201e63-447b-4600-b7e0-59ef7a05edb6"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.510577 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e1201e63-447b-4600-b7e0-59ef7a05edb6" (UID: "e1201e63-447b-4600-b7e0-59ef7a05edb6"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.510747 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e1201e63-447b-4600-b7e0-59ef7a05edb6" (UID: "e1201e63-447b-4600-b7e0-59ef7a05edb6"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.511632 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e1201e63-447b-4600-b7e0-59ef7a05edb6" (UID: "e1201e63-447b-4600-b7e0-59ef7a05edb6"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.512189 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e1201e63-447b-4600-b7e0-59ef7a05edb6" (UID: "e1201e63-447b-4600-b7e0-59ef7a05edb6"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.518656 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1201e63-447b-4600-b7e0-59ef7a05edb6-builder-dockercfg-qfmzh-push" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-push") pod "e1201e63-447b-4600-b7e0-59ef7a05edb6" (UID: "e1201e63-447b-4600-b7e0-59ef7a05edb6"). InnerVolumeSpecName "builder-dockercfg-qfmzh-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.519824 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1201e63-447b-4600-b7e0-59ef7a05edb6-kube-api-access-4npxj" (OuterVolumeSpecName: "kube-api-access-4npxj") pod "e1201e63-447b-4600-b7e0-59ef7a05edb6" (UID: "e1201e63-447b-4600-b7e0-59ef7a05edb6"). InnerVolumeSpecName "kube-api-access-4npxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.533584 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1201e63-447b-4600-b7e0-59ef7a05edb6-builder-dockercfg-qfmzh-pull" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-pull") pod "e1201e63-447b-4600-b7e0-59ef7a05edb6" (UID: "e1201e63-447b-4600-b7e0-59ef7a05edb6"). InnerVolumeSpecName "builder-dockercfg-qfmzh-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.596848 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e1201e63-447b-4600-b7e0-59ef7a05edb6" (UID: "e1201e63-447b-4600-b7e0-59ef7a05edb6"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.610322 4698 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.610351 4698 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e1201e63-447b-4600-b7e0-59ef7a05edb6-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.610363 4698 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.610378 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/e1201e63-447b-4600-b7e0-59ef7a05edb6-builder-dockercfg-qfmzh-pull\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.610391 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/e1201e63-447b-4600-b7e0-59ef7a05edb6-builder-dockercfg-qfmzh-push\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.610402 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4npxj\" (UniqueName: \"kubernetes.io/projected/e1201e63-447b-4600-b7e0-59ef7a05edb6-kube-api-access-4npxj\") on node \"crc\" DevicePath \"\""
Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.610414 4698 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-buildworkdir\") on node \"crc\"
DevicePath \"\"" Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.610425 4698 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.610435 4698 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.610446 4698 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e1201e63-447b-4600-b7e0-59ef7a05edb6-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.610456 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.666189 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_e1201e63-447b-4600-b7e0-59ef7a05edb6/docker-build/0.log" Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.666905 4698 generic.go:334] "Generic (PLEG): container finished" podID="e1201e63-447b-4600-b7e0-59ef7a05edb6" containerID="1349b286b19491442efab56cbc68cc88f463d4029576402dcc0dbe7b9ad1588b" exitCode=1 Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.666970 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.666962 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"e1201e63-447b-4600-b7e0-59ef7a05edb6","Type":"ContainerDied","Data":"1349b286b19491442efab56cbc68cc88f463d4029576402dcc0dbe7b9ad1588b"} Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.667086 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"e1201e63-447b-4600-b7e0-59ef7a05edb6","Type":"ContainerDied","Data":"f15eaa89145e7efac9e0a341d34ae1703349ca6e35bb3af72e213e3dbb41f246"} Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.667112 4698 scope.go:117] "RemoveContainer" containerID="1349b286b19491442efab56cbc68cc88f463d4029576402dcc0dbe7b9ad1588b" Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.701540 4698 scope.go:117] "RemoveContainer" containerID="45686cdd716729927aec319d74ec8638b08ab16469f1f1ef1b0d97f8bb91ecfb" Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.741034 4698 scope.go:117] "RemoveContainer" containerID="1349b286b19491442efab56cbc68cc88f463d4029576402dcc0dbe7b9ad1588b" Feb 16 00:28:51 crc kubenswrapper[4698]: E0216 00:28:51.741600 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1349b286b19491442efab56cbc68cc88f463d4029576402dcc0dbe7b9ad1588b\": container with ID starting with 1349b286b19491442efab56cbc68cc88f463d4029576402dcc0dbe7b9ad1588b not found: ID does not exist" containerID="1349b286b19491442efab56cbc68cc88f463d4029576402dcc0dbe7b9ad1588b" Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.741685 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1349b286b19491442efab56cbc68cc88f463d4029576402dcc0dbe7b9ad1588b"} err="failed to get container status 
\"1349b286b19491442efab56cbc68cc88f463d4029576402dcc0dbe7b9ad1588b\": rpc error: code = NotFound desc = could not find container \"1349b286b19491442efab56cbc68cc88f463d4029576402dcc0dbe7b9ad1588b\": container with ID starting with 1349b286b19491442efab56cbc68cc88f463d4029576402dcc0dbe7b9ad1588b not found: ID does not exist" Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.741723 4698 scope.go:117] "RemoveContainer" containerID="45686cdd716729927aec319d74ec8638b08ab16469f1f1ef1b0d97f8bb91ecfb" Feb 16 00:28:51 crc kubenswrapper[4698]: E0216 00:28:51.742491 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45686cdd716729927aec319d74ec8638b08ab16469f1f1ef1b0d97f8bb91ecfb\": container with ID starting with 45686cdd716729927aec319d74ec8638b08ab16469f1f1ef1b0d97f8bb91ecfb not found: ID does not exist" containerID="45686cdd716729927aec319d74ec8638b08ab16469f1f1ef1b0d97f8bb91ecfb" Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.742545 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45686cdd716729927aec319d74ec8638b08ab16469f1f1ef1b0d97f8bb91ecfb"} err="failed to get container status \"45686cdd716729927aec319d74ec8638b08ab16469f1f1ef1b0d97f8bb91ecfb\": rpc error: code = NotFound desc = could not find container \"45686cdd716729927aec319d74ec8638b08ab16469f1f1ef1b0d97f8bb91ecfb\": container with ID starting with 45686cdd716729927aec319d74ec8638b08ab16469f1f1ef1b0d97f8bb91ecfb not found: ID does not exist" Feb 16 00:28:51 crc kubenswrapper[4698]: I0216 00:28:51.944104 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e1201e63-447b-4600-b7e0-59ef7a05edb6" (UID: "e1201e63-447b-4600-b7e0-59ef7a05edb6"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.002766 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.011179 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.015631 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e1201e63-447b-4600-b7e0-59ef7a05edb6-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.750592 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 16 00:28:52 crc kubenswrapper[4698]: E0216 00:28:52.751184 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1201e63-447b-4600-b7e0-59ef7a05edb6" containerName="manage-dockerfile" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.751204 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1201e63-447b-4600-b7e0-59ef7a05edb6" containerName="manage-dockerfile" Feb 16 00:28:52 crc kubenswrapper[4698]: E0216 00:28:52.751227 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1201e63-447b-4600-b7e0-59ef7a05edb6" containerName="docker-build" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.751236 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1201e63-447b-4600-b7e0-59ef7a05edb6" containerName="docker-build" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.751368 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1201e63-447b-4600-b7e0-59ef7a05edb6" containerName="docker-build" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.752431 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.757496 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-qfmzh" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.757555 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.758053 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.758802 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.769360 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.925719 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.925775 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.925809 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.925829 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.926310 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.926356 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.926377 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 
00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.926406 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-builder-dockercfg-qfmzh-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.926431 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-builder-dockercfg-qfmzh-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.926449 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.926464 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n42vf\" (UniqueName: \"kubernetes.io/projected/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-kube-api-access-n42vf\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:52 crc kubenswrapper[4698]: I0216 00:28:52.926487 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.028092 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-builder-dockercfg-qfmzh-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.028138 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.028159 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n42vf\" (UniqueName: \"kubernetes.io/projected/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-kube-api-access-n42vf\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.028188 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.028208 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.028232 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.028252 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.028268 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.028291 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 
00:28:53.028318 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.028336 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.028365 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-builder-dockercfg-qfmzh-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.028672 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.028817 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.029083 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.029328 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.029448 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.029555 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.029803 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.029884 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.030427 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.032411 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-builder-dockercfg-qfmzh-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.033290 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-builder-dockercfg-qfmzh-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.060276 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n42vf\" (UniqueName: 
\"kubernetes.io/projected/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-kube-api-access-n42vf\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.071265 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.242959 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1201e63-447b-4600-b7e0-59ef7a05edb6" path="/var/lib/kubelet/pods/e1201e63-447b-4600-b7e0-59ef7a05edb6/volumes" Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.349977 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 16 00:28:53 crc kubenswrapper[4698]: I0216 00:28:53.684430 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e","Type":"ContainerStarted","Data":"a7546fb9320513164660583483d55142e1bef94662c8809bc94001322fba501d"} Feb 16 00:28:54 crc kubenswrapper[4698]: I0216 00:28:54.693125 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e","Type":"ContainerStarted","Data":"bc6c3900e49702a717774fb23f4256bc850df429470df3efc0e610daa1c1c861"} Feb 16 00:28:55 crc kubenswrapper[4698]: I0216 00:28:55.700453 4698 generic.go:334] "Generic (PLEG): container finished" podID="7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" containerID="bc6c3900e49702a717774fb23f4256bc850df429470df3efc0e610daa1c1c861" exitCode=0 Feb 16 00:28:55 crc kubenswrapper[4698]: I0216 00:28:55.700580 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" 
event={"ID":"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e","Type":"ContainerDied","Data":"bc6c3900e49702a717774fb23f4256bc850df429470df3efc0e610daa1c1c861"} Feb 16 00:28:56 crc kubenswrapper[4698]: I0216 00:28:56.708972 4698 generic.go:334] "Generic (PLEG): container finished" podID="7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" containerID="86304355d6b46327e1c2e36479aa0fae8ec40592e443cca09953e7bc40d68da3" exitCode=0 Feb 16 00:28:56 crc kubenswrapper[4698]: I0216 00:28:56.709013 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e","Type":"ContainerDied","Data":"86304355d6b46327e1c2e36479aa0fae8ec40592e443cca09953e7bc40d68da3"} Feb 16 00:28:56 crc kubenswrapper[4698]: I0216 00:28:56.780378 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_7b5412ce-e1cd-4d56-96ca-0eb9e69af08e/manage-dockerfile/0.log" Feb 16 00:28:57 crc kubenswrapper[4698]: I0216 00:28:57.720134 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e","Type":"ContainerStarted","Data":"aac03c57845d0436820d04e13d3ac81d52de363d8800d2523bd6a5cd3706fe66"} Feb 16 00:28:57 crc kubenswrapper[4698]: I0216 00:28:57.772101 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.772083569 podStartE2EDuration="5.772083569s" podCreationTimestamp="2026-02-16 00:28:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:28:57.767468144 +0000 UTC m=+1347.425366906" watchObservedRunningTime="2026-02-16 00:28:57.772083569 +0000 UTC m=+1347.429982331" Feb 16 00:29:20 crc kubenswrapper[4698]: I0216 00:29:20.403986 4698 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-f4f8v"]
Feb 16 00:29:20 crc kubenswrapper[4698]: I0216 00:29:20.407921 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:20 crc kubenswrapper[4698]: I0216 00:29:20.415389 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4f8v"]
Feb 16 00:29:20 crc kubenswrapper[4698]: I0216 00:29:20.527205 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067dfad0-cd2f-4159-a6ec-cab75cd54507-catalog-content\") pod \"redhat-operators-f4f8v\" (UID: \"067dfad0-cd2f-4159-a6ec-cab75cd54507\") " pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:20 crc kubenswrapper[4698]: I0216 00:29:20.527270 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067dfad0-cd2f-4159-a6ec-cab75cd54507-utilities\") pod \"redhat-operators-f4f8v\" (UID: \"067dfad0-cd2f-4159-a6ec-cab75cd54507\") " pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:20 crc kubenswrapper[4698]: I0216 00:29:20.527334 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9jmw\" (UniqueName: \"kubernetes.io/projected/067dfad0-cd2f-4159-a6ec-cab75cd54507-kube-api-access-j9jmw\") pod \"redhat-operators-f4f8v\" (UID: \"067dfad0-cd2f-4159-a6ec-cab75cd54507\") " pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:20 crc kubenswrapper[4698]: I0216 00:29:20.628251 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067dfad0-cd2f-4159-a6ec-cab75cd54507-catalog-content\") pod \"redhat-operators-f4f8v\" (UID: \"067dfad0-cd2f-4159-a6ec-cab75cd54507\") " pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:20 crc kubenswrapper[4698]: I0216 00:29:20.628327 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067dfad0-cd2f-4159-a6ec-cab75cd54507-utilities\") pod \"redhat-operators-f4f8v\" (UID: \"067dfad0-cd2f-4159-a6ec-cab75cd54507\") " pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:20 crc kubenswrapper[4698]: I0216 00:29:20.628373 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9jmw\" (UniqueName: \"kubernetes.io/projected/067dfad0-cd2f-4159-a6ec-cab75cd54507-kube-api-access-j9jmw\") pod \"redhat-operators-f4f8v\" (UID: \"067dfad0-cd2f-4159-a6ec-cab75cd54507\") " pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:20 crc kubenswrapper[4698]: I0216 00:29:20.628857 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067dfad0-cd2f-4159-a6ec-cab75cd54507-catalog-content\") pod \"redhat-operators-f4f8v\" (UID: \"067dfad0-cd2f-4159-a6ec-cab75cd54507\") " pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:20 crc kubenswrapper[4698]: I0216 00:29:20.628987 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067dfad0-cd2f-4159-a6ec-cab75cd54507-utilities\") pod \"redhat-operators-f4f8v\" (UID: \"067dfad0-cd2f-4159-a6ec-cab75cd54507\") " pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:20 crc kubenswrapper[4698]: I0216 00:29:20.649549 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9jmw\" (UniqueName: \"kubernetes.io/projected/067dfad0-cd2f-4159-a6ec-cab75cd54507-kube-api-access-j9jmw\") pod \"redhat-operators-f4f8v\" (UID: \"067dfad0-cd2f-4159-a6ec-cab75cd54507\") " pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:20 crc kubenswrapper[4698]: I0216 00:29:20.729804 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:21 crc kubenswrapper[4698]: I0216 00:29:21.003417 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4f8v"]
Feb 16 00:29:21 crc kubenswrapper[4698]: I0216 00:29:21.916086 4698 generic.go:334] "Generic (PLEG): container finished" podID="067dfad0-cd2f-4159-a6ec-cab75cd54507" containerID="873e34b3cc7426b4ea9648db2d236050555491ea711567c9bd8dbec60539e551" exitCode=0
Feb 16 00:29:21 crc kubenswrapper[4698]: I0216 00:29:21.916402 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4f8v" event={"ID":"067dfad0-cd2f-4159-a6ec-cab75cd54507","Type":"ContainerDied","Data":"873e34b3cc7426b4ea9648db2d236050555491ea711567c9bd8dbec60539e551"}
Feb 16 00:29:21 crc kubenswrapper[4698]: I0216 00:29:21.916433 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4f8v" event={"ID":"067dfad0-cd2f-4159-a6ec-cab75cd54507","Type":"ContainerStarted","Data":"5548651ec47c3b172dedfe4237157c1d189b70da40ec0fcbf2675f308ba3da95"}
Feb 16 00:29:21 crc kubenswrapper[4698]: I0216 00:29:21.919160 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 00:29:22 crc kubenswrapper[4698]: I0216 00:29:22.925281 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4f8v" event={"ID":"067dfad0-cd2f-4159-a6ec-cab75cd54507","Type":"ContainerStarted","Data":"71134ae3a05896da117ebb14cb1a7233b06eefeb5f2d9ab9cabaa828eae69b3f"}
Feb 16 00:29:23 crc kubenswrapper[4698]: I0216 00:29:23.937422 4698 generic.go:334] "Generic (PLEG): container finished" podID="067dfad0-cd2f-4159-a6ec-cab75cd54507" containerID="71134ae3a05896da117ebb14cb1a7233b06eefeb5f2d9ab9cabaa828eae69b3f" exitCode=0
Feb 16 00:29:23 crc kubenswrapper[4698]: I0216 00:29:23.937503 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4f8v" event={"ID":"067dfad0-cd2f-4159-a6ec-cab75cd54507","Type":"ContainerDied","Data":"71134ae3a05896da117ebb14cb1a7233b06eefeb5f2d9ab9cabaa828eae69b3f"}
Feb 16 00:29:24 crc kubenswrapper[4698]: I0216 00:29:24.944528 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4f8v" event={"ID":"067dfad0-cd2f-4159-a6ec-cab75cd54507","Type":"ContainerStarted","Data":"ecf0243be17a2d30f4ef7dc11ea0cd586914952fb386f93b2e144ee814493033"}
Feb 16 00:29:24 crc kubenswrapper[4698]: I0216 00:29:24.964502 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f4f8v" podStartSLOduration=2.5478410240000002 podStartE2EDuration="4.964483508s" podCreationTimestamp="2026-02-16 00:29:20 +0000 UTC" firstStartedPulling="2026-02-16 00:29:21.918953106 +0000 UTC m=+1371.576851868" lastFinishedPulling="2026-02-16 00:29:24.33559559 +0000 UTC m=+1373.993494352" observedRunningTime="2026-02-16 00:29:24.960657398 +0000 UTC m=+1374.618556160" watchObservedRunningTime="2026-02-16 00:29:24.964483508 +0000 UTC m=+1374.622382270"
Feb 16 00:29:30 crc kubenswrapper[4698]: I0216 00:29:30.730449 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:30 crc kubenswrapper[4698]: I0216 00:29:30.730920 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:31 crc kubenswrapper[4698]: I0216 00:29:31.798860 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f4f8v" podUID="067dfad0-cd2f-4159-a6ec-cab75cd54507" containerName="registry-server" probeResult="failure" output=<
Feb 16 00:29:31 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s
Feb 16 00:29:31 crc kubenswrapper[4698]: >
Feb 16 00:29:40 crc kubenswrapper[4698]: I0216 00:29:40.803255 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:40 crc kubenswrapper[4698]: I0216 00:29:40.891197 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:41 crc kubenswrapper[4698]: I0216 00:29:41.061047 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4f8v"]
Feb 16 00:29:42 crc kubenswrapper[4698]: I0216 00:29:42.060178 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f4f8v" podUID="067dfad0-cd2f-4159-a6ec-cab75cd54507" containerName="registry-server" containerID="cri-o://ecf0243be17a2d30f4ef7dc11ea0cd586914952fb386f93b2e144ee814493033" gracePeriod=2
Feb 16 00:29:42 crc kubenswrapper[4698]: I0216 00:29:42.527757 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:42 crc kubenswrapper[4698]: I0216 00:29:42.641385 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9jmw\" (UniqueName: \"kubernetes.io/projected/067dfad0-cd2f-4159-a6ec-cab75cd54507-kube-api-access-j9jmw\") pod \"067dfad0-cd2f-4159-a6ec-cab75cd54507\" (UID: \"067dfad0-cd2f-4159-a6ec-cab75cd54507\") "
Feb 16 00:29:42 crc kubenswrapper[4698]: I0216 00:29:42.641517 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067dfad0-cd2f-4159-a6ec-cab75cd54507-catalog-content\") pod \"067dfad0-cd2f-4159-a6ec-cab75cd54507\" (UID: \"067dfad0-cd2f-4159-a6ec-cab75cd54507\") "
Feb 16 00:29:42 crc kubenswrapper[4698]: I0216 00:29:42.641649 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067dfad0-cd2f-4159-a6ec-cab75cd54507-utilities\") pod \"067dfad0-cd2f-4159-a6ec-cab75cd54507\" (UID: \"067dfad0-cd2f-4159-a6ec-cab75cd54507\") "
Feb 16 00:29:42 crc kubenswrapper[4698]: I0216 00:29:42.642529 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/067dfad0-cd2f-4159-a6ec-cab75cd54507-utilities" (OuterVolumeSpecName: "utilities") pod "067dfad0-cd2f-4159-a6ec-cab75cd54507" (UID: "067dfad0-cd2f-4159-a6ec-cab75cd54507"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:29:42 crc kubenswrapper[4698]: I0216 00:29:42.648074 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/067dfad0-cd2f-4159-a6ec-cab75cd54507-kube-api-access-j9jmw" (OuterVolumeSpecName: "kube-api-access-j9jmw") pod "067dfad0-cd2f-4159-a6ec-cab75cd54507" (UID: "067dfad0-cd2f-4159-a6ec-cab75cd54507"). InnerVolumeSpecName "kube-api-access-j9jmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:29:42 crc kubenswrapper[4698]: E0216 00:29:42.728951 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b5412ce_e1cd_4d56_96ca_0eb9e69af08e.slice/buildah-buildah2651216699\": RecentStats: unable to find data in memory cache]"
Feb 16 00:29:42 crc kubenswrapper[4698]: I0216 00:29:42.742777 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067dfad0-cd2f-4159-a6ec-cab75cd54507-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:42 crc kubenswrapper[4698]: I0216 00:29:42.742806 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9jmw\" (UniqueName: \"kubernetes.io/projected/067dfad0-cd2f-4159-a6ec-cab75cd54507-kube-api-access-j9jmw\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:42 crc kubenswrapper[4698]: I0216 00:29:42.768187 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/067dfad0-cd2f-4159-a6ec-cab75cd54507-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "067dfad0-cd2f-4159-a6ec-cab75cd54507" (UID: "067dfad0-cd2f-4159-a6ec-cab75cd54507"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:29:42 crc kubenswrapper[4698]: I0216 00:29:42.844383 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067dfad0-cd2f-4159-a6ec-cab75cd54507-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.069683 4698 generic.go:334] "Generic (PLEG): container finished" podID="067dfad0-cd2f-4159-a6ec-cab75cd54507" containerID="ecf0243be17a2d30f4ef7dc11ea0cd586914952fb386f93b2e144ee814493033" exitCode=0
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.069735 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4f8v" event={"ID":"067dfad0-cd2f-4159-a6ec-cab75cd54507","Type":"ContainerDied","Data":"ecf0243be17a2d30f4ef7dc11ea0cd586914952fb386f93b2e144ee814493033"}
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.069774 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4f8v" event={"ID":"067dfad0-cd2f-4159-a6ec-cab75cd54507","Type":"ContainerDied","Data":"5548651ec47c3b172dedfe4237157c1d189b70da40ec0fcbf2675f308ba3da95"}
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.069788 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4f8v"
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.069794 4698 scope.go:117] "RemoveContainer" containerID="ecf0243be17a2d30f4ef7dc11ea0cd586914952fb386f93b2e144ee814493033"
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.097079 4698 scope.go:117] "RemoveContainer" containerID="71134ae3a05896da117ebb14cb1a7233b06eefeb5f2d9ab9cabaa828eae69b3f"
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.114757 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4f8v"]
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.123698 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f4f8v"]
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.139503 4698 scope.go:117] "RemoveContainer" containerID="873e34b3cc7426b4ea9648db2d236050555491ea711567c9bd8dbec60539e551"
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.159933 4698 scope.go:117] "RemoveContainer" containerID="ecf0243be17a2d30f4ef7dc11ea0cd586914952fb386f93b2e144ee814493033"
Feb 16 00:29:43 crc kubenswrapper[4698]: E0216 00:29:43.161824 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecf0243be17a2d30f4ef7dc11ea0cd586914952fb386f93b2e144ee814493033\": container with ID starting with ecf0243be17a2d30f4ef7dc11ea0cd586914952fb386f93b2e144ee814493033 not found: ID does not exist" containerID="ecf0243be17a2d30f4ef7dc11ea0cd586914952fb386f93b2e144ee814493033"
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.161869 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf0243be17a2d30f4ef7dc11ea0cd586914952fb386f93b2e144ee814493033"} err="failed to get container status \"ecf0243be17a2d30f4ef7dc11ea0cd586914952fb386f93b2e144ee814493033\": rpc error: code = NotFound desc = could not find container \"ecf0243be17a2d30f4ef7dc11ea0cd586914952fb386f93b2e144ee814493033\": container with ID starting with ecf0243be17a2d30f4ef7dc11ea0cd586914952fb386f93b2e144ee814493033 not found: ID does not exist"
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.161900 4698 scope.go:117] "RemoveContainer" containerID="71134ae3a05896da117ebb14cb1a7233b06eefeb5f2d9ab9cabaa828eae69b3f"
Feb 16 00:29:43 crc kubenswrapper[4698]: E0216 00:29:43.162349 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71134ae3a05896da117ebb14cb1a7233b06eefeb5f2d9ab9cabaa828eae69b3f\": container with ID starting with 71134ae3a05896da117ebb14cb1a7233b06eefeb5f2d9ab9cabaa828eae69b3f not found: ID does not exist" containerID="71134ae3a05896da117ebb14cb1a7233b06eefeb5f2d9ab9cabaa828eae69b3f"
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.162413 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71134ae3a05896da117ebb14cb1a7233b06eefeb5f2d9ab9cabaa828eae69b3f"} err="failed to get container status \"71134ae3a05896da117ebb14cb1a7233b06eefeb5f2d9ab9cabaa828eae69b3f\": rpc error: code = NotFound desc = could not find container \"71134ae3a05896da117ebb14cb1a7233b06eefeb5f2d9ab9cabaa828eae69b3f\": container with ID starting with 71134ae3a05896da117ebb14cb1a7233b06eefeb5f2d9ab9cabaa828eae69b3f not found: ID does not exist"
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.162453 4698 scope.go:117] "RemoveContainer" containerID="873e34b3cc7426b4ea9648db2d236050555491ea711567c9bd8dbec60539e551"
Feb 16 00:29:43 crc kubenswrapper[4698]: E0216 00:29:43.163071 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"873e34b3cc7426b4ea9648db2d236050555491ea711567c9bd8dbec60539e551\": container with ID starting with 873e34b3cc7426b4ea9648db2d236050555491ea711567c9bd8dbec60539e551 not found: ID does not exist" containerID="873e34b3cc7426b4ea9648db2d236050555491ea711567c9bd8dbec60539e551"
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.163103 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"873e34b3cc7426b4ea9648db2d236050555491ea711567c9bd8dbec60539e551"} err="failed to get container status \"873e34b3cc7426b4ea9648db2d236050555491ea711567c9bd8dbec60539e551\": rpc error: code = NotFound desc = could not find container \"873e34b3cc7426b4ea9648db2d236050555491ea711567c9bd8dbec60539e551\": container with ID starting with 873e34b3cc7426b4ea9648db2d236050555491ea711567c9bd8dbec60539e551 not found: ID does not exist"
Feb 16 00:29:43 crc kubenswrapper[4698]: I0216 00:29:43.241064 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="067dfad0-cd2f-4159-a6ec-cab75cd54507" path="/var/lib/kubelet/pods/067dfad0-cd2f-4159-a6ec-cab75cd54507/volumes"
Feb 16 00:29:50 crc kubenswrapper[4698]: I0216 00:29:50.115869 4698 generic.go:334] "Generic (PLEG): container finished" podID="7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" containerID="aac03c57845d0436820d04e13d3ac81d52de363d8800d2523bd6a5cd3706fe66" exitCode=0
Feb 16 00:29:50 crc kubenswrapper[4698]: I0216 00:29:50.115943 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e","Type":"ContainerDied","Data":"aac03c57845d0436820d04e13d3ac81d52de363d8800d2523bd6a5cd3706fe66"}
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.421528 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.574795 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-node-pullsecrets\") pod \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") "
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.574839 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-buildcachedir\") pod \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") "
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.574884 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-buildworkdir\") pod \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") "
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.574928 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-blob-cache\") pod \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") "
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.574972 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-proxy-ca-bundles\") pod \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") "
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.575028 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-builder-dockercfg-qfmzh-push\") pod \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") "
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.575053 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-container-storage-run\") pod \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") "
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.575076 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-builder-dockercfg-qfmzh-pull\") pod \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") "
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.575057 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" (UID: "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.575108 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n42vf\" (UniqueName: \"kubernetes.io/projected/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-kube-api-access-n42vf\") pod \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") "
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.575136 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-container-storage-root\") pod \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") "
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.575199 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-system-configs\") pod \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") "
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.575227 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-ca-bundles\") pod \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\" (UID: \"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e\") "
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.575491 4698 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.575791 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" (UID: "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.576467 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" (UID: "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.577029 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" (UID: "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.577336 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" (UID: "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.577942 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" (UID: "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.580795 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" (UID: "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.582732 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-builder-dockercfg-qfmzh-push" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-push") pod "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" (UID: "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e"). InnerVolumeSpecName "builder-dockercfg-qfmzh-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.583209 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-builder-dockercfg-qfmzh-pull" (OuterVolumeSpecName: "builder-dockercfg-qfmzh-pull") pod "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" (UID: "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e"). InnerVolumeSpecName "builder-dockercfg-qfmzh-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.585787 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-kube-api-access-n42vf" (OuterVolumeSpecName: "kube-api-access-n42vf") pod "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" (UID: "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e"). InnerVolumeSpecName "kube-api-access-n42vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.676874 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n42vf\" (UniqueName: \"kubernetes.io/projected/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-kube-api-access-n42vf\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.676929 4698 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.676951 4698 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.676972 4698 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.676992 4698 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.677011 4698 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.677032 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-push\" (UniqueName: \"kubernetes.io/secret/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-builder-dockercfg-qfmzh-push\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.677051 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.677070 4698 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-qfmzh-pull\" (UniqueName: \"kubernetes.io/secret/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-builder-dockercfg-qfmzh-pull\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.683064 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" (UID: "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:29:51 crc kubenswrapper[4698]: I0216 00:29:51.778552 4698 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:52 crc kubenswrapper[4698]: I0216 00:29:52.136663 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"7b5412ce-e1cd-4d56-96ca-0eb9e69af08e","Type":"ContainerDied","Data":"a7546fb9320513164660583483d55142e1bef94662c8809bc94001322fba501d"}
Feb 16 00:29:52 crc kubenswrapper[4698]: I0216 00:29:52.137045 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7546fb9320513164660583483d55142e1bef94662c8809bc94001322fba501d"
Feb 16 00:29:52 crc kubenswrapper[4698]: I0216 00:29:52.136777 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 16 00:29:52 crc kubenswrapper[4698]: I0216 00:29:52.796756 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" (UID: "7b5412ce-e1cd-4d56-96ca-0eb9e69af08e"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:29:52 crc kubenswrapper[4698]: I0216 00:29:52.800154 4698 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7b5412ce-e1cd-4d56-96ca-0eb9e69af08e-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.470694 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc"]
Feb 16 00:29:57 crc kubenswrapper[4698]: E0216 00:29:57.471338 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067dfad0-cd2f-4159-a6ec-cab75cd54507" containerName="extract-content"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.471358 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="067dfad0-cd2f-4159-a6ec-cab75cd54507" containerName="extract-content"
Feb 16 00:29:57 crc kubenswrapper[4698]: E0216 00:29:57.471375 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" containerName="git-clone"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.471388 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" containerName="git-clone"
Feb 16 00:29:57 crc kubenswrapper[4698]: E0216 00:29:57.471407 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" containerName="manage-dockerfile"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.471420 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" containerName="manage-dockerfile"
Feb 16 00:29:57 crc kubenswrapper[4698]: E0216 00:29:57.471440 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" containerName="docker-build"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.471451 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" containerName="docker-build"
Feb 16 00:29:57 crc kubenswrapper[4698]: E0216 00:29:57.471476 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067dfad0-cd2f-4159-a6ec-cab75cd54507" containerName="registry-server"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.471491 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="067dfad0-cd2f-4159-a6ec-cab75cd54507" containerName="registry-server"
Feb 16 00:29:57 crc kubenswrapper[4698]: E0216 00:29:57.471509 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067dfad0-cd2f-4159-a6ec-cab75cd54507" containerName="extract-utilities"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.471521 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="067dfad0-cd2f-4159-a6ec-cab75cd54507" containerName="extract-utilities"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.472043 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="067dfad0-cd2f-4159-a6ec-cab75cd54507" containerName="registry-server"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.472069 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b5412ce-e1cd-4d56-96ca-0eb9e69af08e" containerName="docker-build"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.472740 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.475092 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-jmjzt"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.503070 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc"]
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.567600 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p69t8\" (UniqueName: \"kubernetes.io/projected/b896217b-c297-4228-8940-d2e0a2f7547f-kube-api-access-p69t8\") pod \"smart-gateway-operator-6f55f6c4c5-56jtc\" (UID: \"b896217b-c297-4228-8940-d2e0a2f7547f\") " pod="service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.567773 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/b896217b-c297-4228-8940-d2e0a2f7547f-runner\") pod \"smart-gateway-operator-6f55f6c4c5-56jtc\" (UID: \"b896217b-c297-4228-8940-d2e0a2f7547f\") " pod="service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.669551 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/b896217b-c297-4228-8940-d2e0a2f7547f-runner\") pod \"smart-gateway-operator-6f55f6c4c5-56jtc\" (UID: \"b896217b-c297-4228-8940-d2e0a2f7547f\") " pod="service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.670004 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p69t8\" (UniqueName: \"kubernetes.io/projected/b896217b-c297-4228-8940-d2e0a2f7547f-kube-api-access-p69t8\") pod \"smart-gateway-operator-6f55f6c4c5-56jtc\" (UID: \"b896217b-c297-4228-8940-d2e0a2f7547f\") " pod="service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.670670 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/b896217b-c297-4228-8940-d2e0a2f7547f-runner\") pod \"smart-gateway-operator-6f55f6c4c5-56jtc\" (UID: \"b896217b-c297-4228-8940-d2e0a2f7547f\") " pod="service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.698820 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p69t8\" (UniqueName: \"kubernetes.io/projected/b896217b-c297-4228-8940-d2e0a2f7547f-kube-api-access-p69t8\") pod \"smart-gateway-operator-6f55f6c4c5-56jtc\" (UID: \"b896217b-c297-4228-8940-d2e0a2f7547f\") " pod="service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc"
Feb 16 00:29:57 crc kubenswrapper[4698]: I0216 00:29:57.800191 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc" Feb 16 00:29:58 crc kubenswrapper[4698]: I0216 00:29:58.052958 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc"] Feb 16 00:29:58 crc kubenswrapper[4698]: W0216 00:29:58.057827 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb896217b_c297_4228_8940_d2e0a2f7547f.slice/crio-8d88309e0c75441f0887f5b37a24b9667f6d91ab90c3169b7d3c3712d84d00d4 WatchSource:0}: Error finding container 8d88309e0c75441f0887f5b37a24b9667f6d91ab90c3169b7d3c3712d84d00d4: Status 404 returned error can't find the container with id 8d88309e0c75441f0887f5b37a24b9667f6d91ab90c3169b7d3c3712d84d00d4 Feb 16 00:29:58 crc kubenswrapper[4698]: I0216 00:29:58.201236 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc" event={"ID":"b896217b-c297-4228-8940-d2e0a2f7547f","Type":"ContainerStarted","Data":"8d88309e0c75441f0887f5b37a24b9667f6d91ab90c3169b7d3c3712d84d00d4"} Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.141954 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9"] Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.143312 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.145024 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.148146 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.148772 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9"] Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.307065 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4501cd31-2161-4934-ae5a-4928392d1ed6-secret-volume\") pod \"collect-profiles-29520030-z5tg9\" (UID: \"4501cd31-2161-4934-ae5a-4928392d1ed6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.307136 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4501cd31-2161-4934-ae5a-4928392d1ed6-config-volume\") pod \"collect-profiles-29520030-z5tg9\" (UID: \"4501cd31-2161-4934-ae5a-4928392d1ed6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.307202 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfkql\" (UniqueName: \"kubernetes.io/projected/4501cd31-2161-4934-ae5a-4928392d1ed6-kube-api-access-xfkql\") pod \"collect-profiles-29520030-z5tg9\" (UID: \"4501cd31-2161-4934-ae5a-4928392d1ed6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.408215 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfkql\" (UniqueName: \"kubernetes.io/projected/4501cd31-2161-4934-ae5a-4928392d1ed6-kube-api-access-xfkql\") pod \"collect-profiles-29520030-z5tg9\" (UID: \"4501cd31-2161-4934-ae5a-4928392d1ed6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.408292 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4501cd31-2161-4934-ae5a-4928392d1ed6-secret-volume\") pod \"collect-profiles-29520030-z5tg9\" (UID: \"4501cd31-2161-4934-ae5a-4928392d1ed6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.408373 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4501cd31-2161-4934-ae5a-4928392d1ed6-config-volume\") pod \"collect-profiles-29520030-z5tg9\" (UID: \"4501cd31-2161-4934-ae5a-4928392d1ed6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.409308 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4501cd31-2161-4934-ae5a-4928392d1ed6-config-volume\") pod \"collect-profiles-29520030-z5tg9\" (UID: \"4501cd31-2161-4934-ae5a-4928392d1ed6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.414357 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4501cd31-2161-4934-ae5a-4928392d1ed6-secret-volume\") pod \"collect-profiles-29520030-z5tg9\" (UID: \"4501cd31-2161-4934-ae5a-4928392d1ed6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.424418 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfkql\" (UniqueName: \"kubernetes.io/projected/4501cd31-2161-4934-ae5a-4928392d1ed6-kube-api-access-xfkql\") pod \"collect-profiles-29520030-z5tg9\" (UID: \"4501cd31-2161-4934-ae5a-4928392d1ed6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.472469 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" Feb 16 00:30:00 crc kubenswrapper[4698]: I0216 00:30:00.715796 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9"] Feb 16 00:30:00 crc kubenswrapper[4698]: W0216 00:30:00.722271 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4501cd31_2161_4934_ae5a_4928392d1ed6.slice/crio-3d26bac62a6d088bc3607d4c00e89d1c4a35563b9ae2a8833dd01043de33d2e1 WatchSource:0}: Error finding container 3d26bac62a6d088bc3607d4c00e89d1c4a35563b9ae2a8833dd01043de33d2e1: Status 404 returned error can't find the container with id 3d26bac62a6d088bc3607d4c00e89d1c4a35563b9ae2a8833dd01043de33d2e1 Feb 16 00:30:01 crc kubenswrapper[4698]: I0216 00:30:01.232507 4698 generic.go:334] "Generic (PLEG): container finished" podID="4501cd31-2161-4934-ae5a-4928392d1ed6" containerID="1fad8e08507b133d6866d602ca92edec122360d97833664dd6cdc7a0511b4d80" exitCode=0 Feb 16 00:30:01 crc kubenswrapper[4698]: I0216 00:30:01.256122 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" event={"ID":"4501cd31-2161-4934-ae5a-4928392d1ed6","Type":"ContainerDied","Data":"1fad8e08507b133d6866d602ca92edec122360d97833664dd6cdc7a0511b4d80"} Feb 16 00:30:01 crc kubenswrapper[4698]: I0216 00:30:01.256169 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" event={"ID":"4501cd31-2161-4934-ae5a-4928392d1ed6","Type":"ContainerStarted","Data":"3d26bac62a6d088bc3607d4c00e89d1c4a35563b9ae2a8833dd01043de33d2e1"} Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.313392 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-59bf6579cc-jbsp9"] Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.314669 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-59bf6579cc-jbsp9" Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.316995 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-gf9t6" Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.318126 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-59bf6579cc-jbsp9"] Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.455326 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crzc4\" (UniqueName: \"kubernetes.io/projected/e158a3a2-4367-4cfb-8d31-085f96d9dc6a-kube-api-access-crzc4\") pod \"service-telemetry-operator-59bf6579cc-jbsp9\" (UID: \"e158a3a2-4367-4cfb-8d31-085f96d9dc6a\") " pod="service-telemetry/service-telemetry-operator-59bf6579cc-jbsp9" Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.455379 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/e158a3a2-4367-4cfb-8d31-085f96d9dc6a-runner\") pod \"service-telemetry-operator-59bf6579cc-jbsp9\" (UID: \"e158a3a2-4367-4cfb-8d31-085f96d9dc6a\") " pod="service-telemetry/service-telemetry-operator-59bf6579cc-jbsp9" Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.556804 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crzc4\" (UniqueName: \"kubernetes.io/projected/e158a3a2-4367-4cfb-8d31-085f96d9dc6a-kube-api-access-crzc4\") pod \"service-telemetry-operator-59bf6579cc-jbsp9\" (UID: \"e158a3a2-4367-4cfb-8d31-085f96d9dc6a\") " pod="service-telemetry/service-telemetry-operator-59bf6579cc-jbsp9" Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.556845 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/e158a3a2-4367-4cfb-8d31-085f96d9dc6a-runner\") pod \"service-telemetry-operator-59bf6579cc-jbsp9\" (UID: \"e158a3a2-4367-4cfb-8d31-085f96d9dc6a\") " pod="service-telemetry/service-telemetry-operator-59bf6579cc-jbsp9" Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.557343 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/e158a3a2-4367-4cfb-8d31-085f96d9dc6a-runner\") pod \"service-telemetry-operator-59bf6579cc-jbsp9\" (UID: \"e158a3a2-4367-4cfb-8d31-085f96d9dc6a\") " pod="service-telemetry/service-telemetry-operator-59bf6579cc-jbsp9" Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.575597 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crzc4\" (UniqueName: \"kubernetes.io/projected/e158a3a2-4367-4cfb-8d31-085f96d9dc6a-kube-api-access-crzc4\") pod \"service-telemetry-operator-59bf6579cc-jbsp9\" (UID: \"e158a3a2-4367-4cfb-8d31-085f96d9dc6a\") " pod="service-telemetry/service-telemetry-operator-59bf6579cc-jbsp9" Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.643250 4698 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-59bf6579cc-jbsp9" Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.813313 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.963393 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4501cd31-2161-4934-ae5a-4928392d1ed6-config-volume\") pod \"4501cd31-2161-4934-ae5a-4928392d1ed6\" (UID: \"4501cd31-2161-4934-ae5a-4928392d1ed6\") " Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.963470 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfkql\" (UniqueName: \"kubernetes.io/projected/4501cd31-2161-4934-ae5a-4928392d1ed6-kube-api-access-xfkql\") pod \"4501cd31-2161-4934-ae5a-4928392d1ed6\" (UID: \"4501cd31-2161-4934-ae5a-4928392d1ed6\") " Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.963581 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4501cd31-2161-4934-ae5a-4928392d1ed6-secret-volume\") pod \"4501cd31-2161-4934-ae5a-4928392d1ed6\" (UID: \"4501cd31-2161-4934-ae5a-4928392d1ed6\") " Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.965182 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4501cd31-2161-4934-ae5a-4928392d1ed6-config-volume" (OuterVolumeSpecName: "config-volume") pod "4501cd31-2161-4934-ae5a-4928392d1ed6" (UID: "4501cd31-2161-4934-ae5a-4928392d1ed6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.967308 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4501cd31-2161-4934-ae5a-4928392d1ed6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4501cd31-2161-4934-ae5a-4928392d1ed6" (UID: "4501cd31-2161-4934-ae5a-4928392d1ed6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:30:03 crc kubenswrapper[4698]: I0216 00:30:03.969435 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4501cd31-2161-4934-ae5a-4928392d1ed6-kube-api-access-xfkql" (OuterVolumeSpecName: "kube-api-access-xfkql") pod "4501cd31-2161-4934-ae5a-4928392d1ed6" (UID: "4501cd31-2161-4934-ae5a-4928392d1ed6"). InnerVolumeSpecName "kube-api-access-xfkql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:30:04 crc kubenswrapper[4698]: I0216 00:30:04.064634 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4501cd31-2161-4934-ae5a-4928392d1ed6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 00:30:04 crc kubenswrapper[4698]: I0216 00:30:04.064672 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4501cd31-2161-4934-ae5a-4928392d1ed6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 00:30:04 crc kubenswrapper[4698]: I0216 00:30:04.064681 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfkql\" (UniqueName: \"kubernetes.io/projected/4501cd31-2161-4934-ae5a-4928392d1ed6-kube-api-access-xfkql\") on node \"crc\" DevicePath \"\"" Feb 16 00:30:04 crc kubenswrapper[4698]: I0216 00:30:04.256217 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" 
event={"ID":"4501cd31-2161-4934-ae5a-4928392d1ed6","Type":"ContainerDied","Data":"3d26bac62a6d088bc3607d4c00e89d1c4a35563b9ae2a8833dd01043de33d2e1"} Feb 16 00:30:04 crc kubenswrapper[4698]: I0216 00:30:04.256267 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d26bac62a6d088bc3607d4c00e89d1c4a35563b9ae2a8833dd01043de33d2e1" Feb 16 00:30:04 crc kubenswrapper[4698]: I0216 00:30:04.256308 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-z5tg9" Feb 16 00:30:09 crc kubenswrapper[4698]: I0216 00:30:09.455550 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-59bf6579cc-jbsp9"] Feb 16 00:30:13 crc kubenswrapper[4698]: E0216 00:30:13.024578 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/smart-gateway-operator:latest" Feb 16 00:30:13 crc kubenswrapper[4698]: E0216 00:30:13.025184 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/infrawatch/smart-gateway-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:smart-gateway-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY_SMARTGATEWAY_SMARTGATEWAY_INFRA_WATCH,Value:4,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DEBUG_LOGS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CORE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BRIDGE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-bridge:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:smart-gateway-operator.v5.0.1771201793,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p69t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop
:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod smart-gateway-operator-6f55f6c4c5-56jtc_service-telemetry(b896217b-c297-4228-8940-d2e0a2f7547f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 00:30:13 crc kubenswrapper[4698]: E0216 00:30:13.026447 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc" podUID="b896217b-c297-4228-8940-d2e0a2f7547f" Feb 16 00:30:13 crc kubenswrapper[4698]: I0216 00:30:13.323369 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-59bf6579cc-jbsp9" event={"ID":"e158a3a2-4367-4cfb-8d31-085f96d9dc6a","Type":"ContainerStarted","Data":"9d51d3d3c3950d23f6913bbfea541ce67465c64cdbeb91d74b9aaf37594a0a31"} Feb 16 00:30:13 crc kubenswrapper[4698]: E0216 00:30:13.325681 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/smart-gateway-operator:latest\\\"\"" pod="service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc" podUID="b896217b-c297-4228-8940-d2e0a2f7547f" Feb 16 00:30:18 crc kubenswrapper[4698]: I0216 00:30:18.366965 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-59bf6579cc-jbsp9" 
event={"ID":"e158a3a2-4367-4cfb-8d31-085f96d9dc6a","Type":"ContainerStarted","Data":"d51e98291e13bda1246f2520211d96096d4509f45b8c124f521c7e2a15d78627"} Feb 16 00:30:18 crc kubenswrapper[4698]: I0216 00:30:18.414952 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-59bf6579cc-jbsp9" podStartSLOduration=10.419985128 podStartE2EDuration="15.414918797s" podCreationTimestamp="2026-02-16 00:30:03 +0000 UTC" firstStartedPulling="2026-02-16 00:30:12.521517829 +0000 UTC m=+1422.179416601" lastFinishedPulling="2026-02-16 00:30:17.516451508 +0000 UTC m=+1427.174350270" observedRunningTime="2026-02-16 00:30:18.401888801 +0000 UTC m=+1428.059787643" watchObservedRunningTime="2026-02-16 00:30:18.414918797 +0000 UTC m=+1428.072817599" Feb 16 00:30:27 crc kubenswrapper[4698]: I0216 00:30:27.046067 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:30:27 crc kubenswrapper[4698]: I0216 00:30:27.046847 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:30:27 crc kubenswrapper[4698]: I0216 00:30:27.435885 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc" event={"ID":"b896217b-c297-4228-8940-d2e0a2f7547f","Type":"ContainerStarted","Data":"601348107ce3be72620dbfd7b4306eccd7de10c259fe22925131794f5a14bc28"} Feb 16 00:30:27 crc kubenswrapper[4698]: I0216 00:30:27.476852 4698 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="service-telemetry/smart-gateway-operator-6f55f6c4c5-56jtc" podStartSLOduration=1.866723347 podStartE2EDuration="30.476825787s" podCreationTimestamp="2026-02-16 00:29:57 +0000 UTC" firstStartedPulling="2026-02-16 00:29:58.059145489 +0000 UTC m=+1407.717044261" lastFinishedPulling="2026-02-16 00:30:26.669247929 +0000 UTC m=+1436.327146701" observedRunningTime="2026-02-16 00:30:27.472895096 +0000 UTC m=+1437.130793888" watchObservedRunningTime="2026-02-16 00:30:27.476825787 +0000 UTC m=+1437.134724599" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.167077 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-p7772"] Feb 16 00:30:42 crc kubenswrapper[4698]: E0216 00:30:42.167842 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4501cd31-2161-4934-ae5a-4928392d1ed6" containerName="collect-profiles" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.167857 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4501cd31-2161-4934-ae5a-4928392d1ed6" containerName="collect-profiles" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.167998 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4501cd31-2161-4934-ae5a-4928392d1ed6" containerName="collect-profiles" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.168453 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.178935 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.179134 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.179187 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.179256 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-lnqst" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.179319 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.179392 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.179830 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.199769 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-p7772"] Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.253537 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnknj\" (UniqueName: \"kubernetes.io/projected/ffc063fe-f10a-45b0-9ad8-789923ddbbef-kube-api-access-jnknj\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " 
pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.253579 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ffc063fe-f10a-45b0-9ad8-789923ddbbef-sasl-config\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.253603 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.253660 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-sasl-users\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.253700 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.253716 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.253742 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.355052 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnknj\" (UniqueName: \"kubernetes.io/projected/ffc063fe-f10a-45b0-9ad8-789923ddbbef-kube-api-access-jnknj\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.355837 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ffc063fe-f10a-45b0-9ad8-789923ddbbef-sasl-config\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.357595 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ffc063fe-f10a-45b0-9ad8-789923ddbbef-sasl-config\") pod \"default-interconnect-68864d46cb-p7772\" (UID: 
\"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.357702 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.357747 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-sasl-users\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.357791 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.357813 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.358752 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.364965 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.365152 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.365248 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-sasl-users\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.365275 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-openstack-ca\") pod 
\"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.367074 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.378834 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnknj\" (UniqueName: \"kubernetes.io/projected/ffc063fe-f10a-45b0-9ad8-789923ddbbef-kube-api-access-jnknj\") pod \"default-interconnect-68864d46cb-p7772\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.490516 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:30:42 crc kubenswrapper[4698]: I0216 00:30:42.767224 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-p7772"] Feb 16 00:30:43 crc kubenswrapper[4698]: I0216 00:30:43.562876 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-p7772" event={"ID":"ffc063fe-f10a-45b0-9ad8-789923ddbbef","Type":"ContainerStarted","Data":"491308413a01a4e43292715ff3add13a8cc8d9b9f785fe162a6db17ad357b01b"} Feb 16 00:30:49 crc kubenswrapper[4698]: I0216 00:30:49.601720 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-p7772" event={"ID":"ffc063fe-f10a-45b0-9ad8-789923ddbbef","Type":"ContainerStarted","Data":"bb75c8accbe3bd9e99a333ae4a102dfb78729fe58cf2b322b3f6c7e429a59e57"} Feb 16 00:30:49 crc kubenswrapper[4698]: I0216 00:30:49.636846 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-p7772" podStartSLOduration=1.7931245169999999 podStartE2EDuration="7.636814869s" podCreationTimestamp="2026-02-16 00:30:42 +0000 UTC" firstStartedPulling="2026-02-16 00:30:42.786482744 +0000 UTC m=+1452.444381516" lastFinishedPulling="2026-02-16 00:30:48.630173106 +0000 UTC m=+1458.288071868" observedRunningTime="2026-02-16 00:30:49.628436329 +0000 UTC m=+1459.286335101" watchObservedRunningTime="2026-02-16 00:30:49.636814869 +0000 UTC m=+1459.294713671" Feb 16 00:30:52 crc kubenswrapper[4698]: I0216 00:30:52.926571 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 16 00:30:52 crc kubenswrapper[4698]: I0216 00:30:52.928581 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 16 00:30:52 crc kubenswrapper[4698]: I0216 00:30:52.930854 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Feb 16 00:30:52 crc kubenswrapper[4698]: I0216 00:30:52.931026 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Feb 16 00:30:52 crc kubenswrapper[4698]: I0216 00:30:52.940902 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Feb 16 00:30:52 crc kubenswrapper[4698]: I0216 00:30:52.941739 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-2" Feb 16 00:30:52 crc kubenswrapper[4698]: I0216 00:30:52.941862 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Feb 16 00:30:52 crc kubenswrapper[4698]: I0216 00:30:52.941985 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Feb 16 00:30:52 crc kubenswrapper[4698]: I0216 00:30:52.942085 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Feb 16 00:30:52 crc kubenswrapper[4698]: I0216 00:30:52.942309 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-1" Feb 16 00:30:52 crc kubenswrapper[4698]: I0216 00:30:52.942431 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Feb 16 00:30:52 crc kubenswrapper[4698]: I0216 00:30:52.942568 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-wd9hz" Feb 16 00:30:52 crc kubenswrapper[4698]: I0216 00:30:52.958224 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/prometheus-default-0"] Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.161325 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-web-config\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.161391 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/60f89283-c44b-475b-a87a-2471155ac745-config-out\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.161418 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60f89283-c44b-475b-a87a-2471155ac745-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.161496 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/60f89283-c44b-475b-a87a-2471155ac745-tls-assets\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.161533 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-config\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " 
pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.161554 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/60f89283-c44b-475b-a87a-2471155ac745-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.161578 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdkst\" (UniqueName: \"kubernetes.io/projected/60f89283-c44b-475b-a87a-2471155ac745-kube-api-access-zdkst\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.161605 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.161647 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/60f89283-c44b-475b-a87a-2471155ac745-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.161866 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.161944 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/60f89283-c44b-475b-a87a-2471155ac745-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.161982 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7ac127ca-dc99-421a-992d-9f804153df4a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ac127ca-dc99-421a-992d-9f804153df4a\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.264202 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.264301 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/60f89283-c44b-475b-a87a-2471155ac745-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.264353 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7ac127ca-dc99-421a-992d-9f804153df4a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ac127ca-dc99-421a-992d-9f804153df4a\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.264436 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-web-config\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.264490 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/60f89283-c44b-475b-a87a-2471155ac745-config-out\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.264550 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60f89283-c44b-475b-a87a-2471155ac745-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: E0216 00:30:53.264494 4698 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.264606 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/60f89283-c44b-475b-a87a-2471155ac745-tls-assets\") pod \"prometheus-default-0\" (UID: 
\"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: E0216 00:30:53.264747 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-secret-default-prometheus-proxy-tls podName:60f89283-c44b-475b-a87a-2471155ac745 nodeName:}" failed. No retries permitted until 2026-02-16 00:30:53.764704835 +0000 UTC m=+1463.422603637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "60f89283-c44b-475b-a87a-2471155ac745") : secret "default-prometheus-proxy-tls" not found Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.264799 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-config\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.264862 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/60f89283-c44b-475b-a87a-2471155ac745-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.264909 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdkst\" (UniqueName: \"kubernetes.io/projected/60f89283-c44b-475b-a87a-2471155ac745-kube-api-access-zdkst\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc 
kubenswrapper[4698]: I0216 00:30:53.264963 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.265014 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/60f89283-c44b-475b-a87a-2471155ac745-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.265640 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60f89283-c44b-475b-a87a-2471155ac745-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.265705 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/60f89283-c44b-475b-a87a-2471155ac745-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.266545 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/60f89283-c44b-475b-a87a-2471155ac745-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " 
pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.267696 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/60f89283-c44b-475b-a87a-2471155ac745-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.268235 4698 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.268297 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7ac127ca-dc99-421a-992d-9f804153df4a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ac127ca-dc99-421a-992d-9f804153df4a\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7317519ab1cabea75233b168c365c7e59ffcd88728a7bb59b2b323167baa99a0/globalmount\"" pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.271555 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-web-config\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.271756 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/60f89283-c44b-475b-a87a-2471155ac745-tls-assets\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc 
kubenswrapper[4698]: I0216 00:30:53.272727 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.273200 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-config\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.273895 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/60f89283-c44b-475b-a87a-2471155ac745-config-out\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.300123 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7ac127ca-dc99-421a-992d-9f804153df4a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ac127ca-dc99-421a-992d-9f804153df4a\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.303662 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdkst\" (UniqueName: \"kubernetes.io/projected/60f89283-c44b-475b-a87a-2471155ac745-kube-api-access-zdkst\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: I0216 00:30:53.772194 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:53 crc kubenswrapper[4698]: E0216 00:30:53.772352 4698 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 16 00:30:53 crc kubenswrapper[4698]: E0216 00:30:53.773545 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-secret-default-prometheus-proxy-tls podName:60f89283-c44b-475b-a87a-2471155ac745 nodeName:}" failed. No retries permitted until 2026-02-16 00:30:54.773509377 +0000 UTC m=+1464.431408199 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "60f89283-c44b-475b-a87a-2471155ac745") : secret "default-prometheus-proxy-tls" not found Feb 16 00:30:54 crc kubenswrapper[4698]: I0216 00:30:54.789851 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:54 crc kubenswrapper[4698]: I0216 00:30:54.814771 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/60f89283-c44b-475b-a87a-2471155ac745-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: 
\"60f89283-c44b-475b-a87a-2471155ac745\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:30:54 crc kubenswrapper[4698]: I0216 00:30:54.880790 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 16 00:30:55 crc kubenswrapper[4698]: I0216 00:30:55.171865 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 16 00:30:55 crc kubenswrapper[4698]: I0216 00:30:55.657008 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"60f89283-c44b-475b-a87a-2471155ac745","Type":"ContainerStarted","Data":"bec975d4927c7402714fa39f3cb8f679f72faa368380f627541656442a655c8d"} Feb 16 00:30:57 crc kubenswrapper[4698]: I0216 00:30:57.045799 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:30:57 crc kubenswrapper[4698]: I0216 00:30:57.045893 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:31:00 crc kubenswrapper[4698]: I0216 00:31:00.700579 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"60f89283-c44b-475b-a87a-2471155ac745","Type":"ContainerStarted","Data":"8042c777062b7237f57d84b5fce3d7ac4cff5e5b71454c87c46696d899f92ef5"} Feb 16 00:31:03 crc kubenswrapper[4698]: I0216 00:31:03.614059 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-gtdjr"] Feb 16 
00:31:03 crc kubenswrapper[4698]: I0216 00:31:03.616842 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-gtdjr"
Feb 16 00:31:03 crc kubenswrapper[4698]: I0216 00:31:03.661457 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-gtdjr"]
Feb 16 00:31:03 crc kubenswrapper[4698]: I0216 00:31:03.725508 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mzqs\" (UniqueName: \"kubernetes.io/projected/3137c83d-63b6-4a67-8ada-535b0b55ff6e-kube-api-access-7mzqs\") pod \"default-snmp-webhook-6856cfb745-gtdjr\" (UID: \"3137c83d-63b6-4a67-8ada-535b0b55ff6e\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-gtdjr"
Feb 16 00:31:03 crc kubenswrapper[4698]: I0216 00:31:03.826670 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mzqs\" (UniqueName: \"kubernetes.io/projected/3137c83d-63b6-4a67-8ada-535b0b55ff6e-kube-api-access-7mzqs\") pod \"default-snmp-webhook-6856cfb745-gtdjr\" (UID: \"3137c83d-63b6-4a67-8ada-535b0b55ff6e\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-gtdjr"
Feb 16 00:31:03 crc kubenswrapper[4698]: I0216 00:31:03.859401 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mzqs\" (UniqueName: \"kubernetes.io/projected/3137c83d-63b6-4a67-8ada-535b0b55ff6e-kube-api-access-7mzqs\") pod \"default-snmp-webhook-6856cfb745-gtdjr\" (UID: \"3137c83d-63b6-4a67-8ada-535b0b55ff6e\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-gtdjr"
Feb 16 00:31:03 crc kubenswrapper[4698]: I0216 00:31:03.972415 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-gtdjr"
Feb 16 00:31:04 crc kubenswrapper[4698]: I0216 00:31:04.420738 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-gtdjr"]
Feb 16 00:31:04 crc kubenswrapper[4698]: I0216 00:31:04.729303 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-gtdjr" event={"ID":"3137c83d-63b6-4a67-8ada-535b0b55ff6e","Type":"ContainerStarted","Data":"ebb4f49c4025a665793da18336a687385aae22b3d6a36bcd43d03fe55f55ee5d"}
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.151128 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"]
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.152783 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.155602 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.155735 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.155819 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.155948 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.156044 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-77mlg"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.156204 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.167012 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.270410 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-web-config\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.270656 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-config-volume\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.270703 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.270761 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e347bc90-f8d8-4f18-8ab7-f3f5685703d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e347bc90-f8d8-4f18-8ab7-f3f5685703d4\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.270781 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.270839 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e7998f1-4de5-475f-9c43-a9beba750f02-config-out\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.270857 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.270976 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e7998f1-4de5-475f-9c43-a9beba750f02-tls-assets\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.271002 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmsnl\" (UniqueName: \"kubernetes.io/projected/1e7998f1-4de5-475f-9c43-a9beba750f02-kube-api-access-kmsnl\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.372884 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-web-config\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.373226 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-config-volume\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.373257 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.373287 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.373306 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e347bc90-f8d8-4f18-8ab7-f3f5685703d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e347bc90-f8d8-4f18-8ab7-f3f5685703d4\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.373335 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e7998f1-4de5-475f-9c43-a9beba750f02-config-out\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.373351 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.374377 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e7998f1-4de5-475f-9c43-a9beba750f02-tls-assets\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: E0216 00:31:07.373696 4698 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Feb 16 00:31:07 crc kubenswrapper[4698]: E0216 00:31:07.374505 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-alertmanager-proxy-tls podName:1e7998f1-4de5-475f-9c43-a9beba750f02 nodeName:}" failed. No retries permitted until 2026-02-16 00:31:07.874485965 +0000 UTC m=+1477.532384727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "1e7998f1-4de5-475f-9c43-a9beba750f02") : secret "default-alertmanager-proxy-tls" not found
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.375230 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmsnl\" (UniqueName: \"kubernetes.io/projected/1e7998f1-4de5-475f-9c43-a9beba750f02-kube-api-access-kmsnl\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.379493 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e7998f1-4de5-475f-9c43-a9beba750f02-tls-assets\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.381291 4698 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.381327 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e347bc90-f8d8-4f18-8ab7-f3f5685703d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e347bc90-f8d8-4f18-8ab7-f3f5685703d4\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ff738b80ce152a3ed990383c7920c9d10bdb1cd12c0867b44a805fd063b09ebf/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.385048 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-config-volume\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.385689 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.388161 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-web-config\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.394587 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e7998f1-4de5-475f-9c43-a9beba750f02-config-out\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.395189 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmsnl\" (UniqueName: \"kubernetes.io/projected/1e7998f1-4de5-475f-9c43-a9beba750f02-kube-api-access-kmsnl\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.397949 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.422523 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e347bc90-f8d8-4f18-8ab7-f3f5685703d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e347bc90-f8d8-4f18-8ab7-f3f5685703d4\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.753223 4698 generic.go:334] "Generic (PLEG): container finished" podID="60f89283-c44b-475b-a87a-2471155ac745" containerID="8042c777062b7237f57d84b5fce3d7ac4cff5e5b71454c87c46696d899f92ef5" exitCode=0
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.753264 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"60f89283-c44b-475b-a87a-2471155ac745","Type":"ContainerDied","Data":"8042c777062b7237f57d84b5fce3d7ac4cff5e5b71454c87c46696d899f92ef5"}
Feb 16 00:31:07 crc kubenswrapper[4698]: I0216 00:31:07.883269 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:07 crc kubenswrapper[4698]: E0216 00:31:07.884124 4698 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Feb 16 00:31:07 crc kubenswrapper[4698]: E0216 00:31:07.884182 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-alertmanager-proxy-tls podName:1e7998f1-4de5-475f-9c43-a9beba750f02 nodeName:}" failed. No retries permitted until 2026-02-16 00:31:08.884166084 +0000 UTC m=+1478.542064846 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "1e7998f1-4de5-475f-9c43-a9beba750f02") : secret "default-alertmanager-proxy-tls" not found
Feb 16 00:31:08 crc kubenswrapper[4698]: I0216 00:31:08.897601 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:08 crc kubenswrapper[4698]: E0216 00:31:08.897911 4698 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Feb 16 00:31:08 crc kubenswrapper[4698]: E0216 00:31:08.897973 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-alertmanager-proxy-tls podName:1e7998f1-4de5-475f-9c43-a9beba750f02 nodeName:}" failed. No retries permitted until 2026-02-16 00:31:10.89795438 +0000 UTC m=+1480.555853152 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "1e7998f1-4de5-475f-9c43-a9beba750f02") : secret "default-alertmanager-proxy-tls" not found
Feb 16 00:31:10 crc kubenswrapper[4698]: I0216 00:31:10.928518 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:10 crc kubenswrapper[4698]: I0216 00:31:10.935969 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e7998f1-4de5-475f-9c43-a9beba750f02-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"1e7998f1-4de5-475f-9c43-a9beba750f02\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:11 crc kubenswrapper[4698]: I0216 00:31:11.113251 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Feb 16 00:31:12 crc kubenswrapper[4698]: I0216 00:31:12.301955 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Feb 16 00:31:12 crc kubenswrapper[4698]: I0216 00:31:12.823349 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-gtdjr" event={"ID":"3137c83d-63b6-4a67-8ada-535b0b55ff6e","Type":"ContainerStarted","Data":"60a8e04dfa7ca11380bad3b5e44222ae0b6366728b427dfa94763c9ad2cdb71b"}
Feb 16 00:31:12 crc kubenswrapper[4698]: I0216 00:31:12.824478 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"1e7998f1-4de5-475f-9c43-a9beba750f02","Type":"ContainerStarted","Data":"b48a3ad535d1deb2c44544d69176f119d2ddb85e3aba8ec711ba93f382c72cf1"}
Feb 16 00:31:12 crc kubenswrapper[4698]: I0216 00:31:12.839946 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-gtdjr" podStartSLOduration=2.189251929 podStartE2EDuration="9.839930948s" podCreationTimestamp="2026-02-16 00:31:03 +0000 UTC" firstStartedPulling="2026-02-16 00:31:04.434070851 +0000 UTC m=+1474.091969613" lastFinishedPulling="2026-02-16 00:31:12.08474987 +0000 UTC m=+1481.742648632" observedRunningTime="2026-02-16 00:31:12.838568056 +0000 UTC m=+1482.496466818" watchObservedRunningTime="2026-02-16 00:31:12.839930948 +0000 UTC m=+1482.497829710"
Feb 16 00:31:14 crc kubenswrapper[4698]: I0216 00:31:14.838500 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"1e7998f1-4de5-475f-9c43-a9beba750f02","Type":"ContainerStarted","Data":"56155c7acbe52e572429ea6c4c8bcf6cd1193142641a46fafda1d6c1c9d346b7"}
Feb 16 00:31:16 crc kubenswrapper[4698]: I0216 00:31:16.860717 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"60f89283-c44b-475b-a87a-2471155ac745","Type":"ContainerStarted","Data":"4af69759e76239577cf07fe3b93a42a863014d33df7b029b14e0f85a28b27555"}
Feb 16 00:31:18 crc kubenswrapper[4698]: I0216 00:31:18.877949 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"60f89283-c44b-475b-a87a-2471155ac745","Type":"ContainerStarted","Data":"971a4e3475ab116a1dbf4b14d44afb0cf28233d29fb82a396a8b203912f2daa2"}
Feb 16 00:31:20 crc kubenswrapper[4698]: I0216 00:31:20.878677 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"]
Feb 16 00:31:20 crc kubenswrapper[4698]: I0216 00:31:20.879797 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:20 crc kubenswrapper[4698]: I0216 00:31:20.882062 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap"
Feb 16 00:31:20 crc kubenswrapper[4698]: I0216 00:31:20.882062 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret"
Feb 16 00:31:20 crc kubenswrapper[4698]: I0216 00:31:20.882336 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls"
Feb 16 00:31:20 crc kubenswrapper[4698]: I0216 00:31:20.884223 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-lx7q6"
Feb 16 00:31:20 crc kubenswrapper[4698]: I0216 00:31:20.893816 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"]
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.006777 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8de7b9dc-c355-4830-a8c4-397a66fea53b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.006848 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8de7b9dc-c355-4830-a8c4-397a66fea53b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.006930 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8de7b9dc-c355-4830-a8c4-397a66fea53b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.007070 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd7xh\" (UniqueName: \"kubernetes.io/projected/8de7b9dc-c355-4830-a8c4-397a66fea53b-kube-api-access-kd7xh\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.007230 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/8de7b9dc-c355-4830-a8c4-397a66fea53b-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.108290 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd7xh\" (UniqueName: \"kubernetes.io/projected/8de7b9dc-c355-4830-a8c4-397a66fea53b-kube-api-access-kd7xh\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.108396 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/8de7b9dc-c355-4830-a8c4-397a66fea53b-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.108445 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8de7b9dc-c355-4830-a8c4-397a66fea53b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.108473 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8de7b9dc-c355-4830-a8c4-397a66fea53b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.108523 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8de7b9dc-c355-4830-a8c4-397a66fea53b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.109210 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8de7b9dc-c355-4830-a8c4-397a66fea53b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: E0216 00:31:21.109327 4698 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Feb 16 00:31:21 crc kubenswrapper[4698]: E0216 00:31:21.109386 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8de7b9dc-c355-4830-a8c4-397a66fea53b-default-cloud1-coll-meter-proxy-tls podName:8de7b9dc-c355-4830-a8c4-397a66fea53b nodeName:}" failed. No retries permitted until 2026-02-16 00:31:21.609367071 +0000 UTC m=+1491.267265833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/8de7b9dc-c355-4830-a8c4-397a66fea53b-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf" (UID: "8de7b9dc-c355-4830-a8c4-397a66fea53b") : secret "default-cloud1-coll-meter-proxy-tls" not found
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.109605 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8de7b9dc-c355-4830-a8c4-397a66fea53b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.125020 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/8de7b9dc-c355-4830-a8c4-397a66fea53b-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.125175 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd7xh\" (UniqueName: \"kubernetes.io/projected/8de7b9dc-c355-4830-a8c4-397a66fea53b-kube-api-access-kd7xh\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: I0216 00:31:21.617334 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8de7b9dc-c355-4830-a8c4-397a66fea53b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:21 crc kubenswrapper[4698]: E0216 00:31:21.617652 4698 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Feb 16 00:31:21 crc kubenswrapper[4698]: E0216 00:31:21.617713 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8de7b9dc-c355-4830-a8c4-397a66fea53b-default-cloud1-coll-meter-proxy-tls podName:8de7b9dc-c355-4830-a8c4-397a66fea53b nodeName:}" failed. No retries permitted until 2026-02-16 00:31:22.617695867 +0000 UTC m=+1492.275594639 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/8de7b9dc-c355-4830-a8c4-397a66fea53b-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf" (UID: "8de7b9dc-c355-4830-a8c4-397a66fea53b") : secret "default-cloud1-coll-meter-proxy-tls" not found
Feb 16 00:31:22 crc kubenswrapper[4698]: I0216 00:31:22.629843 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8de7b9dc-c355-4830-a8c4-397a66fea53b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:22 crc kubenswrapper[4698]: I0216 00:31:22.644807 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8de7b9dc-c355-4830-a8c4-397a66fea53b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf\" (UID: \"8de7b9dc-c355-4830-a8c4-397a66fea53b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:22 crc kubenswrapper[4698]: I0216 00:31:22.706703 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"
Feb 16 00:31:22 crc kubenswrapper[4698]: I0216 00:31:22.908549 4698 generic.go:334] "Generic (PLEG): container finished" podID="1e7998f1-4de5-475f-9c43-a9beba750f02" containerID="56155c7acbe52e572429ea6c4c8bcf6cd1193142641a46fafda1d6c1c9d346b7" exitCode=0
Feb 16 00:31:22 crc kubenswrapper[4698]: I0216 00:31:22.908595 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"1e7998f1-4de5-475f-9c43-a9beba750f02","Type":"ContainerDied","Data":"56155c7acbe52e572429ea6c4c8bcf6cd1193142641a46fafda1d6c1c9d346b7"}
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.356596 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj"]
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.357918 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj"
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.368512 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj"]
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.370434 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls"
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.404183 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap"
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.439601 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ee87103e-a39d-4f01-9843-26056d8805a6-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj"
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.439674 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ee87103e-a39d-4f01-9843-26056d8805a6-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj"
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.439719 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl5gt\" (UniqueName: \"kubernetes.io/projected/ee87103e-a39d-4f01-9843-26056d8805a6-kube-api-access-bl5gt\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj"
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.439875 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee87103e-a39d-4f01-9843-26056d8805a6-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj"
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.439972 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ee87103e-a39d-4f01-9843-26056d8805a6-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj"
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.541112 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee87103e-a39d-4f01-9843-26056d8805a6-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj"
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.541193 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ee87103e-a39d-4f01-9843-26056d8805a6-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj"
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.541244 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ee87103e-a39d-4f01-9843-26056d8805a6-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj"
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.541272 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ee87103e-a39d-4f01-9843-26056d8805a6-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj"
Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.541323 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl5gt\" (UniqueName: \"kubernetes.io/projected/ee87103e-a39d-4f01-9843-26056d8805a6-kube-api-access-bl5gt\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj"
Feb 16 00:31:23 crc kubenswrapper[4698]: E0216 00:31:23.541343 4698 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Feb 16 00:31:23 crc kubenswrapper[4698]: E0216 00:31:23.541427 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee87103e-a39d-4f01-9843-26056d8805a6-default-cloud1-ceil-meter-proxy-tls podName:ee87103e-a39d-4f01-9843-26056d8805a6 nodeName:}" failed.
No retries permitted until 2026-02-16 00:31:24.041403976 +0000 UTC m=+1493.699302818 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/ee87103e-a39d-4f01-9843-26056d8805a6-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" (UID: "ee87103e-a39d-4f01-9843-26056d8805a6") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.541849 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ee87103e-a39d-4f01-9843-26056d8805a6-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.542306 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ee87103e-a39d-4f01-9843-26056d8805a6-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.553445 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ee87103e-a39d-4f01-9843-26056d8805a6-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" Feb 16 00:31:23 crc kubenswrapper[4698]: I0216 00:31:23.561386 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl5gt\" (UniqueName: 
\"kubernetes.io/projected/ee87103e-a39d-4f01-9843-26056d8805a6-kube-api-access-bl5gt\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" Feb 16 00:31:24 crc kubenswrapper[4698]: I0216 00:31:24.048267 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee87103e-a39d-4f01-9843-26056d8805a6-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" Feb 16 00:31:24 crc kubenswrapper[4698]: E0216 00:31:24.048414 4698 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 16 00:31:24 crc kubenswrapper[4698]: E0216 00:31:24.048605 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee87103e-a39d-4f01-9843-26056d8805a6-default-cloud1-ceil-meter-proxy-tls podName:ee87103e-a39d-4f01-9843-26056d8805a6 nodeName:}" failed. No retries permitted until 2026-02-16 00:31:25.048583277 +0000 UTC m=+1494.706482059 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/ee87103e-a39d-4f01-9843-26056d8805a6-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" (UID: "ee87103e-a39d-4f01-9843-26056d8805a6") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 16 00:31:25 crc kubenswrapper[4698]: I0216 00:31:25.062372 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee87103e-a39d-4f01-9843-26056d8805a6-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" Feb 16 00:31:25 crc kubenswrapper[4698]: I0216 00:31:25.067635 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee87103e-a39d-4f01-9843-26056d8805a6-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj\" (UID: \"ee87103e-a39d-4f01-9843-26056d8805a6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" Feb 16 00:31:25 crc kubenswrapper[4698]: I0216 00:31:25.207082 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.046131 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.046483 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.046535 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.047143 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"} pod="openshift-machine-config-operator/machine-config-daemon-z56m2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.047192 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" containerID="cri-o://6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf" gracePeriod=600 Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.124272 4698 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn"] Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.127002 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.135784 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn"] Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.135990 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.136013 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.195344 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/08930fbb-e669-42c6-a2b1-e36b32415a75-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.195398 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/08930fbb-e669-42c6-a2b1-e36b32415a75-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.195442 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/08930fbb-e669-42c6-a2b1-e36b32415a75-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.195461 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/08930fbb-e669-42c6-a2b1-e36b32415a75-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.195486 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prrq7\" (UniqueName: \"kubernetes.io/projected/08930fbb-e669-42c6-a2b1-e36b32415a75-kube-api-access-prrq7\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.296609 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/08930fbb-e669-42c6-a2b1-e36b32415a75-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.296700 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/08930fbb-e669-42c6-a2b1-e36b32415a75-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.296765 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/08930fbb-e669-42c6-a2b1-e36b32415a75-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.296796 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/08930fbb-e669-42c6-a2b1-e36b32415a75-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.296823 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prrq7\" (UniqueName: \"kubernetes.io/projected/08930fbb-e669-42c6-a2b1-e36b32415a75-kube-api-access-prrq7\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: E0216 00:31:27.297584 4698 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 16 00:31:27 crc kubenswrapper[4698]: E0216 00:31:27.297680 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/08930fbb-e669-42c6-a2b1-e36b32415a75-default-cloud1-sens-meter-proxy-tls podName:08930fbb-e669-42c6-a2b1-e36b32415a75 nodeName:}" failed. No retries permitted until 2026-02-16 00:31:27.797661956 +0000 UTC m=+1497.455560718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/08930fbb-e669-42c6-a2b1-e36b32415a75-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" (UID: "08930fbb-e669-42c6-a2b1-e36b32415a75") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.297838 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/08930fbb-e669-42c6-a2b1-e36b32415a75-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.297908 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/08930fbb-e669-42c6-a2b1-e36b32415a75-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.305706 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/08930fbb-e669-42c6-a2b1-e36b32415a75-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 
00:31:27.316422 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prrq7\" (UniqueName: \"kubernetes.io/projected/08930fbb-e669-42c6-a2b1-e36b32415a75-kube-api-access-prrq7\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: E0216 00:31:27.585121 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.803586 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/08930fbb-e669-42c6-a2b1-e36b32415a75-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:27 crc kubenswrapper[4698]: E0216 00:31:27.803789 4698 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 16 00:31:27 crc kubenswrapper[4698]: E0216 00:31:27.803881 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08930fbb-e669-42c6-a2b1-e36b32415a75-default-cloud1-sens-meter-proxy-tls podName:08930fbb-e669-42c6-a2b1-e36b32415a75 nodeName:}" failed. No retries permitted until 2026-02-16 00:31:28.803864497 +0000 UTC m=+1498.461763259 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/08930fbb-e669-42c6-a2b1-e36b32415a75-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" (UID: "08930fbb-e669-42c6-a2b1-e36b32415a75") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.943204 4698 generic.go:334] "Generic (PLEG): container finished" podID="7b351654-277f-4d0d-84f9-b003f934936c" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf" exitCode=0 Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.943256 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerDied","Data":"6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"} Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.943316 4698 scope.go:117] "RemoveContainer" containerID="8920ac12f28a10d0002a39e030cf4a986f53bbff6cab4c65dd53e3307065853f" Feb 16 00:31:27 crc kubenswrapper[4698]: I0216 00:31:27.944232 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf" Feb 16 00:31:27 crc kubenswrapper[4698]: E0216 00:31:27.944512 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" Feb 16 00:31:28 crc kubenswrapper[4698]: I0216 00:31:28.782317 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf"] Feb 
16 00:31:28 crc kubenswrapper[4698]: I0216 00:31:28.822455 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/08930fbb-e669-42c6-a2b1-e36b32415a75-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:28 crc kubenswrapper[4698]: I0216 00:31:28.827978 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/08930fbb-e669-42c6-a2b1-e36b32415a75-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-779sn\" (UID: \"08930fbb-e669-42c6-a2b1-e36b32415a75\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:28 crc kubenswrapper[4698]: I0216 00:31:28.923427 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj"] Feb 16 00:31:28 crc kubenswrapper[4698]: I0216 00:31:28.951351 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"60f89283-c44b-475b-a87a-2471155ac745","Type":"ContainerStarted","Data":"04fc4e8c27f7da648656a04f457189afb0a0f473c060ac90daf6bcc6147d84d8"} Feb 16 00:31:28 crc kubenswrapper[4698]: I0216 00:31:28.954270 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" Feb 16 00:31:29 crc kubenswrapper[4698]: W0216 00:31:29.370204 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8de7b9dc_c355_4830_a8c4_397a66fea53b.slice/crio-71d6bbf89b7891e109ebe2f1d8ee84f0cd8fc4f1ddf19e7cd5acf092cb2acae6 WatchSource:0}: Error finding container 71d6bbf89b7891e109ebe2f1d8ee84f0cd8fc4f1ddf19e7cd5acf092cb2acae6: Status 404 returned error can't find the container with id 71d6bbf89b7891e109ebe2f1d8ee84f0cd8fc4f1ddf19e7cd5acf092cb2acae6 Feb 16 00:31:29 crc kubenswrapper[4698]: W0216 00:31:29.373603 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee87103e_a39d_4f01_9843_26056d8805a6.slice/crio-9735af82a09612e4a509fb0eeb2d7822df480f360fb3f95f28daf06dd0593018 WatchSource:0}: Error finding container 9735af82a09612e4a509fb0eeb2d7822df480f360fb3f95f28daf06dd0593018: Status 404 returned error can't find the container with id 9735af82a09612e4a509fb0eeb2d7822df480f360fb3f95f28daf06dd0593018 Feb 16 00:31:29 crc kubenswrapper[4698]: I0216 00:31:29.881308 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Feb 16 00:31:29 crc kubenswrapper[4698]: I0216 00:31:29.976376 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf" event={"ID":"8de7b9dc-c355-4830-a8c4-397a66fea53b","Type":"ContainerStarted","Data":"71d6bbf89b7891e109ebe2f1d8ee84f0cd8fc4f1ddf19e7cd5acf092cb2acae6"} Feb 16 00:31:29 crc kubenswrapper[4698]: I0216 00:31:29.979720 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" 
event={"ID":"ee87103e-a39d-4f01-9843-26056d8805a6","Type":"ContainerStarted","Data":"9735af82a09612e4a509fb0eeb2d7822df480f360fb3f95f28daf06dd0593018"} Feb 16 00:31:30 crc kubenswrapper[4698]: I0216 00:31:30.087111 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=5.730786796 podStartE2EDuration="39.087094262s" podCreationTimestamp="2026-02-16 00:30:51 +0000 UTC" firstStartedPulling="2026-02-16 00:30:55.193839012 +0000 UTC m=+1464.851737774" lastFinishedPulling="2026-02-16 00:31:28.550146478 +0000 UTC m=+1498.208045240" observedRunningTime="2026-02-16 00:31:28.979908901 +0000 UTC m=+1498.637807663" watchObservedRunningTime="2026-02-16 00:31:30.087094262 +0000 UTC m=+1499.744993024" Feb 16 00:31:30 crc kubenswrapper[4698]: I0216 00:31:30.087887 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn"] Feb 16 00:31:30 crc kubenswrapper[4698]: I0216 00:31:30.989442 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf" event={"ID":"8de7b9dc-c355-4830-a8c4-397a66fea53b","Type":"ContainerStarted","Data":"be32ee769239b960d00920cf80e1dc5a36d4916dbfc2111b3323b4a951a8330d"} Feb 16 00:31:30 crc kubenswrapper[4698]: I0216 00:31:30.991630 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" event={"ID":"ee87103e-a39d-4f01-9843-26056d8805a6","Type":"ContainerStarted","Data":"d4663332c232566ece787a0e5f7b822fedb1b8f4bd9a7c9fffdbca7850dba5f9"} Feb 16 00:31:30 crc kubenswrapper[4698]: I0216 00:31:30.993975 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" 
event={"ID":"08930fbb-e669-42c6-a2b1-e36b32415a75","Type":"ContainerStarted","Data":"bd7e391d495815fbd3665997f00d5c4ec7605c2f459ac3f6134073bf02893791"} Feb 16 00:31:30 crc kubenswrapper[4698]: I0216 00:31:30.994020 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" event={"ID":"08930fbb-e669-42c6-a2b1-e36b32415a75","Type":"ContainerStarted","Data":"a95db0ec9bd6e5301dbf7ba2bf35cfbdbadbb6b265ffb80ff3c5a86fe36f782d"} Feb 16 00:31:30 crc kubenswrapper[4698]: I0216 00:31:30.995595 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"1e7998f1-4de5-475f-9c43-a9beba750f02","Type":"ContainerStarted","Data":"eaa3f52e1e65cef767ca28877cefc27fbfadc7e959a00e3e41da997915ae310a"} Feb 16 00:31:32 crc kubenswrapper[4698]: I0216 00:31:32.004656 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" event={"ID":"08930fbb-e669-42c6-a2b1-e36b32415a75","Type":"ContainerStarted","Data":"2853aab95d3937494677732723193752f97b47f333cd41792efe78d0254656d3"} Feb 16 00:31:32 crc kubenswrapper[4698]: I0216 00:31:32.007358 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf" event={"ID":"8de7b9dc-c355-4830-a8c4-397a66fea53b","Type":"ContainerStarted","Data":"8e7864cccd11a21bda1c672034a8e5a93ebacbeb7469668e2f0e760fd5dff3e8"} Feb 16 00:31:32 crc kubenswrapper[4698]: I0216 00:31:32.009225 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" event={"ID":"ee87103e-a39d-4f01-9843-26056d8805a6","Type":"ContainerStarted","Data":"3d2db9deb33a2f753514506e320001dfd62aff36e2319fd726e5d426929924b5"} Feb 16 00:31:33 crc kubenswrapper[4698]: I0216 00:31:33.020342 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/alertmanager-default-0" event={"ID":"1e7998f1-4de5-475f-9c43-a9beba750f02","Type":"ContainerStarted","Data":"dbb70448e76e67df4e4555c8b9dc8e8f7ef0b53095cae2d30abc9bba3f37eb5c"} Feb 16 00:31:33 crc kubenswrapper[4698]: I0216 00:31:33.020385 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"1e7998f1-4de5-475f-9c43-a9beba750f02","Type":"ContainerStarted","Data":"2ff84edb149bd5761992f4034d3debae616044bedbf04570f43b103f28d5fed4"} Feb 16 00:31:33 crc kubenswrapper[4698]: I0216 00:31:33.045229 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=17.424467938 podStartE2EDuration="27.045213657s" podCreationTimestamp="2026-02-16 00:31:06 +0000 UTC" firstStartedPulling="2026-02-16 00:31:22.910142654 +0000 UTC m=+1492.568041416" lastFinishedPulling="2026-02-16 00:31:32.530888373 +0000 UTC m=+1502.188787135" observedRunningTime="2026-02-16 00:31:33.040989565 +0000 UTC m=+1502.698888327" watchObservedRunningTime="2026-02-16 00:31:33.045213657 +0000 UTC m=+1502.703112409" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.529635 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq"] Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.530837 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.536597 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.536923 4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.554049 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq"] Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.722099 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/006f7f53-12d8-4372-9a72-d7ed8e42a53f-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq\" (UID: \"006f7f53-12d8-4372-9a72-d7ed8e42a53f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.722149 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pckdb\" (UniqueName: \"kubernetes.io/projected/006f7f53-12d8-4372-9a72-d7ed8e42a53f-kube-api-access-pckdb\") pod \"default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq\" (UID: \"006f7f53-12d8-4372-9a72-d7ed8e42a53f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.722267 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/006f7f53-12d8-4372-9a72-d7ed8e42a53f-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq\" (UID: 
\"006f7f53-12d8-4372-9a72-d7ed8e42a53f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.722291 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/006f7f53-12d8-4372-9a72-d7ed8e42a53f-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq\" (UID: \"006f7f53-12d8-4372-9a72-d7ed8e42a53f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.823939 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/006f7f53-12d8-4372-9a72-d7ed8e42a53f-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq\" (UID: \"006f7f53-12d8-4372-9a72-d7ed8e42a53f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.824003 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/006f7f53-12d8-4372-9a72-d7ed8e42a53f-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq\" (UID: \"006f7f53-12d8-4372-9a72-d7ed8e42a53f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.824046 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/006f7f53-12d8-4372-9a72-d7ed8e42a53f-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq\" (UID: \"006f7f53-12d8-4372-9a72-d7ed8e42a53f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.824091 
4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pckdb\" (UniqueName: \"kubernetes.io/projected/006f7f53-12d8-4372-9a72-d7ed8e42a53f-kube-api-access-pckdb\") pod \"default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq\" (UID: \"006f7f53-12d8-4372-9a72-d7ed8e42a53f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.824820 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/006f7f53-12d8-4372-9a72-d7ed8e42a53f-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq\" (UID: \"006f7f53-12d8-4372-9a72-d7ed8e42a53f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.825447 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/006f7f53-12d8-4372-9a72-d7ed8e42a53f-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq\" (UID: \"006f7f53-12d8-4372-9a72-d7ed8e42a53f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.838302 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/006f7f53-12d8-4372-9a72-d7ed8e42a53f-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq\" (UID: \"006f7f53-12d8-4372-9a72-d7ed8e42a53f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.846155 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pckdb\" (UniqueName: \"kubernetes.io/projected/006f7f53-12d8-4372-9a72-d7ed8e42a53f-kube-api-access-pckdb\") pod 
\"default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq\" (UID: \"006f7f53-12d8-4372-9a72-d7ed8e42a53f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" Feb 16 00:31:34 crc kubenswrapper[4698]: I0216 00:31:34.851004 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.220791 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm"] Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.222735 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.225110 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.243565 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm"] Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.332774 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78c5f\" (UniqueName: \"kubernetes.io/projected/b4421358-74ea-4070-b840-e5fb3fa20124-kube-api-access-78c5f\") pod \"default-cloud1-ceil-event-smartgateway-57487576b-276hm\" (UID: \"b4421358-74ea-4070-b840-e5fb3fa20124\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.332831 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b4421358-74ea-4070-b840-e5fb3fa20124-sg-core-config\") pod 
\"default-cloud1-ceil-event-smartgateway-57487576b-276hm\" (UID: \"b4421358-74ea-4070-b840-e5fb3fa20124\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.332900 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b4421358-74ea-4070-b840-e5fb3fa20124-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-57487576b-276hm\" (UID: \"b4421358-74ea-4070-b840-e5fb3fa20124\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.332995 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/b4421358-74ea-4070-b840-e5fb3fa20124-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-57487576b-276hm\" (UID: \"b4421358-74ea-4070-b840-e5fb3fa20124\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.434473 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78c5f\" (UniqueName: \"kubernetes.io/projected/b4421358-74ea-4070-b840-e5fb3fa20124-kube-api-access-78c5f\") pod \"default-cloud1-ceil-event-smartgateway-57487576b-276hm\" (UID: \"b4421358-74ea-4070-b840-e5fb3fa20124\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.434527 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b4421358-74ea-4070-b840-e5fb3fa20124-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-57487576b-276hm\" (UID: \"b4421358-74ea-4070-b840-e5fb3fa20124\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.434581 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b4421358-74ea-4070-b840-e5fb3fa20124-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-57487576b-276hm\" (UID: \"b4421358-74ea-4070-b840-e5fb3fa20124\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.434669 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/b4421358-74ea-4070-b840-e5fb3fa20124-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-57487576b-276hm\" (UID: \"b4421358-74ea-4070-b840-e5fb3fa20124\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.436143 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b4421358-74ea-4070-b840-e5fb3fa20124-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-57487576b-276hm\" (UID: \"b4421358-74ea-4070-b840-e5fb3fa20124\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.436634 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b4421358-74ea-4070-b840-e5fb3fa20124-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-57487576b-276hm\" (UID: \"b4421358-74ea-4070-b840-e5fb3fa20124\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.452364 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" 
(UniqueName: \"kubernetes.io/secret/b4421358-74ea-4070-b840-e5fb3fa20124-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-57487576b-276hm\" (UID: \"b4421358-74ea-4070-b840-e5fb3fa20124\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.465486 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78c5f\" (UniqueName: \"kubernetes.io/projected/b4421358-74ea-4070-b840-e5fb3fa20124-kube-api-access-78c5f\") pod \"default-cloud1-ceil-event-smartgateway-57487576b-276hm\" (UID: \"b4421358-74ea-4070-b840-e5fb3fa20124\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" Feb 16 00:31:35 crc kubenswrapper[4698]: I0216 00:31:35.545501 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" Feb 16 00:31:37 crc kubenswrapper[4698]: I0216 00:31:37.065397 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf" event={"ID":"8de7b9dc-c355-4830-a8c4-397a66fea53b","Type":"ContainerStarted","Data":"33e5acbee7e5f75ed2e9c54db473cbfc6a6c86a9729ca6c8138721af8ca32c94"} Feb 16 00:31:37 crc kubenswrapper[4698]: I0216 00:31:37.076709 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq"] Feb 16 00:31:37 crc kubenswrapper[4698]: I0216 00:31:37.078676 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" event={"ID":"ee87103e-a39d-4f01-9843-26056d8805a6","Type":"ContainerStarted","Data":"5d37032ac279ab42f3b4af8419ac5c54e5fab99df1acfe1cdf64a1d8f8a428d1"} Feb 16 00:31:37 crc kubenswrapper[4698]: I0216 00:31:37.081642 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" event={"ID":"08930fbb-e669-42c6-a2b1-e36b32415a75","Type":"ContainerStarted","Data":"9e33b8bca79c570d06d5ace84a99ea3976fac88f81a40ab96278bacd895dd0e6"} Feb 16 00:31:37 crc kubenswrapper[4698]: I0216 00:31:37.086508 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf" podStartSLOduration=10.363146782 podStartE2EDuration="17.086495406s" podCreationTimestamp="2026-02-16 00:31:20 +0000 UTC" firstStartedPulling="2026-02-16 00:31:29.971074972 +0000 UTC m=+1499.628973734" lastFinishedPulling="2026-02-16 00:31:36.694423596 +0000 UTC m=+1506.352322358" observedRunningTime="2026-02-16 00:31:37.082858542 +0000 UTC m=+1506.740757304" watchObservedRunningTime="2026-02-16 00:31:37.086495406 +0000 UTC m=+1506.744394168" Feb 16 00:31:37 crc kubenswrapper[4698]: I0216 00:31:37.110113 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm"] Feb 16 00:31:37 crc kubenswrapper[4698]: W0216 00:31:37.123182 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4421358_74ea_4070_b840_e5fb3fa20124.slice/crio-dfcb00763b460ad6d641a0d445365fafbf7fcf8371e997ca5a309ebe0fe7bfd7 WatchSource:0}: Error finding container dfcb00763b460ad6d641a0d445365fafbf7fcf8371e997ca5a309ebe0fe7bfd7: Status 404 returned error can't find the container with id dfcb00763b460ad6d641a0d445365fafbf7fcf8371e997ca5a309ebe0fe7bfd7 Feb 16 00:31:37 crc kubenswrapper[4698]: I0216 00:31:37.129816 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" podStartSLOduration=7.365940838 podStartE2EDuration="14.129797833s" podCreationTimestamp="2026-02-16 00:31:23 +0000 UTC" 
firstStartedPulling="2026-02-16 00:31:29.970783973 +0000 UTC m=+1499.628682735" lastFinishedPulling="2026-02-16 00:31:36.734640968 +0000 UTC m=+1506.392539730" observedRunningTime="2026-02-16 00:31:37.116662674 +0000 UTC m=+1506.774561436" watchObservedRunningTime="2026-02-16 00:31:37.129797833 +0000 UTC m=+1506.787696595" Feb 16 00:31:37 crc kubenswrapper[4698]: I0216 00:31:37.143820 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" podStartSLOduration=3.51508517 podStartE2EDuration="10.143799229s" podCreationTimestamp="2026-02-16 00:31:27 +0000 UTC" firstStartedPulling="2026-02-16 00:31:30.10085318 +0000 UTC m=+1499.758751942" lastFinishedPulling="2026-02-16 00:31:36.729567239 +0000 UTC m=+1506.387466001" observedRunningTime="2026-02-16 00:31:37.137450921 +0000 UTC m=+1506.795349683" watchObservedRunningTime="2026-02-16 00:31:37.143799229 +0000 UTC m=+1506.801697991" Feb 16 00:31:38 crc kubenswrapper[4698]: I0216 00:31:38.090054 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" event={"ID":"006f7f53-12d8-4372-9a72-d7ed8e42a53f","Type":"ContainerStarted","Data":"5683f584034604b8f21e3423d47ff3dde77a6e7c77a988854bed16140001c1c8"} Feb 16 00:31:38 crc kubenswrapper[4698]: I0216 00:31:38.090453 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" event={"ID":"006f7f53-12d8-4372-9a72-d7ed8e42a53f","Type":"ContainerStarted","Data":"ce1f15a2957a15f89d57cc4d75cf58b4b88bad6a5560b4696c996e0cd39e3640"} Feb 16 00:31:38 crc kubenswrapper[4698]: I0216 00:31:38.090473 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" 
event={"ID":"006f7f53-12d8-4372-9a72-d7ed8e42a53f","Type":"ContainerStarted","Data":"75ae37af5916d631df43f43c3ab57ec6108d053c6ea6f56b2e2996ace692a540"} Feb 16 00:31:38 crc kubenswrapper[4698]: I0216 00:31:38.093960 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" event={"ID":"b4421358-74ea-4070-b840-e5fb3fa20124","Type":"ContainerStarted","Data":"d702aae793495f186bc29112e7fb9058f6569de2f362b8adfd3781f64f7678ce"} Feb 16 00:31:38 crc kubenswrapper[4698]: I0216 00:31:38.094009 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" event={"ID":"b4421358-74ea-4070-b840-e5fb3fa20124","Type":"ContainerStarted","Data":"38a4f4557f15038ae0cbbb56213451ae8090d2ea86d106f52c18f456b9994bee"} Feb 16 00:31:38 crc kubenswrapper[4698]: I0216 00:31:38.094023 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" event={"ID":"b4421358-74ea-4070-b840-e5fb3fa20124","Type":"ContainerStarted","Data":"dfcb00763b460ad6d641a0d445365fafbf7fcf8371e997ca5a309ebe0fe7bfd7"} Feb 16 00:31:38 crc kubenswrapper[4698]: I0216 00:31:38.110067 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" podStartSLOduration=3.765485142 podStartE2EDuration="4.110053634s" podCreationTimestamp="2026-02-16 00:31:34 +0000 UTC" firstStartedPulling="2026-02-16 00:31:37.071841159 +0000 UTC m=+1506.729739921" lastFinishedPulling="2026-02-16 00:31:37.416409651 +0000 UTC m=+1507.074308413" observedRunningTime="2026-02-16 00:31:38.105209274 +0000 UTC m=+1507.763108036" watchObservedRunningTime="2026-02-16 00:31:38.110053634 +0000 UTC m=+1507.767952396" Feb 16 00:31:38 crc kubenswrapper[4698]: I0216 00:31:38.127738 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" podStartSLOduration=2.831297821 podStartE2EDuration="3.127715174s" podCreationTimestamp="2026-02-16 00:31:35 +0000 UTC" firstStartedPulling="2026-02-16 00:31:37.126910814 +0000 UTC m=+1506.784809576" lastFinishedPulling="2026-02-16 00:31:37.423328167 +0000 UTC m=+1507.081226929" observedRunningTime="2026-02-16 00:31:38.121605834 +0000 UTC m=+1507.779504596" watchObservedRunningTime="2026-02-16 00:31:38.127715174 +0000 UTC m=+1507.785613936" Feb 16 00:31:39 crc kubenswrapper[4698]: I0216 00:31:39.881896 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Feb 16 00:31:39 crc kubenswrapper[4698]: I0216 00:31:39.943587 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Feb 16 00:31:40 crc kubenswrapper[4698]: I0216 00:31:40.153767 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Feb 16 00:31:42 crc kubenswrapper[4698]: I0216 00:31:42.231161 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf" Feb 16 00:31:42 crc kubenswrapper[4698]: E0216 00:31:42.231772 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.273392 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-p7772"] Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.274113 4698 
kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-p7772" podUID="ffc063fe-f10a-45b0-9ad8-789923ddbbef" containerName="default-interconnect" containerID="cri-o://bb75c8accbe3bd9e99a333ae4a102dfb78729fe58cf2b322b3f6c7e429a59e57" gracePeriod=30 Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.734412 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-p7772" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.817366 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnknj\" (UniqueName: \"kubernetes.io/projected/ffc063fe-f10a-45b0-9ad8-789923ddbbef-kube-api-access-jnknj\") pod \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.817439 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-inter-router-credentials\") pod \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.817476 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ffc063fe-f10a-45b0-9ad8-789923ddbbef-sasl-config\") pod \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.817516 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-inter-router-ca\") pod \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\" 
(UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.817543 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-openstack-credentials\") pod \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.817566 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-sasl-users\") pod \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.817648 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-openstack-ca\") pod \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\" (UID: \"ffc063fe-f10a-45b0-9ad8-789923ddbbef\") " Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.818579 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffc063fe-f10a-45b0-9ad8-789923ddbbef-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "ffc063fe-f10a-45b0-9ad8-789923ddbbef" (UID: "ffc063fe-f10a-45b0-9ad8-789923ddbbef"). InnerVolumeSpecName "sasl-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.822907 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "ffc063fe-f10a-45b0-9ad8-789923ddbbef" (UID: "ffc063fe-f10a-45b0-9ad8-789923ddbbef"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.824738 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "ffc063fe-f10a-45b0-9ad8-789923ddbbef" (UID: "ffc063fe-f10a-45b0-9ad8-789923ddbbef"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.824849 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "ffc063fe-f10a-45b0-9ad8-789923ddbbef" (UID: "ffc063fe-f10a-45b0-9ad8-789923ddbbef"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.834428 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "ffc063fe-f10a-45b0-9ad8-789923ddbbef" (UID: "ffc063fe-f10a-45b0-9ad8-789923ddbbef"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.837669 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc063fe-f10a-45b0-9ad8-789923ddbbef-kube-api-access-jnknj" (OuterVolumeSpecName: "kube-api-access-jnknj") pod "ffc063fe-f10a-45b0-9ad8-789923ddbbef" (UID: "ffc063fe-f10a-45b0-9ad8-789923ddbbef"). InnerVolumeSpecName "kube-api-access-jnknj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.837822 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "ffc063fe-f10a-45b0-9ad8-789923ddbbef" (UID: "ffc063fe-f10a-45b0-9ad8-789923ddbbef"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.919053 4698 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.919089 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnknj\" (UniqueName: \"kubernetes.io/projected/ffc063fe-f10a-45b0-9ad8-789923ddbbef-kube-api-access-jnknj\") on node \"crc\" DevicePath \"\"" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.919099 4698 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.919110 4698 
reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ffc063fe-f10a-45b0-9ad8-789923ddbbef-sasl-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.919119 4698 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.919128 4698 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Feb 16 00:31:47 crc kubenswrapper[4698]: I0216 00:31:47.919137 4698 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/ffc063fe-f10a-45b0-9ad8-789923ddbbef-sasl-users\") on node \"crc\" DevicePath \"\"" Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.158456 4698 generic.go:334] "Generic (PLEG): container finished" podID="ee87103e-a39d-4f01-9843-26056d8805a6" containerID="3d2db9deb33a2f753514506e320001dfd62aff36e2319fd726e5d426929924b5" exitCode=0 Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.158546 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" event={"ID":"ee87103e-a39d-4f01-9843-26056d8805a6","Type":"ContainerDied","Data":"3d2db9deb33a2f753514506e320001dfd62aff36e2319fd726e5d426929924b5"} Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.159445 4698 scope.go:117] "RemoveContainer" containerID="3d2db9deb33a2f753514506e320001dfd62aff36e2319fd726e5d426929924b5" Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.160376 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="006f7f53-12d8-4372-9a72-d7ed8e42a53f" containerID="ce1f15a2957a15f89d57cc4d75cf58b4b88bad6a5560b4696c996e0cd39e3640" exitCode=0
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.160451    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" event={"ID":"006f7f53-12d8-4372-9a72-d7ed8e42a53f","Type":"ContainerDied","Data":"ce1f15a2957a15f89d57cc4d75cf58b4b88bad6a5560b4696c996e0cd39e3640"}
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.160728    4698 scope.go:117] "RemoveContainer" containerID="ce1f15a2957a15f89d57cc4d75cf58b4b88bad6a5560b4696c996e0cd39e3640"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.165336    4698 generic.go:334] "Generic (PLEG): container finished" podID="8de7b9dc-c355-4830-a8c4-397a66fea53b" containerID="8e7864cccd11a21bda1c672034a8e5a93ebacbeb7469668e2f0e760fd5dff3e8" exitCode=0
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.165444    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf" event={"ID":"8de7b9dc-c355-4830-a8c4-397a66fea53b","Type":"ContainerDied","Data":"8e7864cccd11a21bda1c672034a8e5a93ebacbeb7469668e2f0e760fd5dff3e8"}
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.166455    4698 scope.go:117] "RemoveContainer" containerID="8e7864cccd11a21bda1c672034a8e5a93ebacbeb7469668e2f0e760fd5dff3e8"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.167567    4698 generic.go:334] "Generic (PLEG): container finished" podID="b4421358-74ea-4070-b840-e5fb3fa20124" containerID="38a4f4557f15038ae0cbbb56213451ae8090d2ea86d106f52c18f456b9994bee" exitCode=0
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.167657    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" event={"ID":"b4421358-74ea-4070-b840-e5fb3fa20124","Type":"ContainerDied","Data":"38a4f4557f15038ae0cbbb56213451ae8090d2ea86d106f52c18f456b9994bee"}
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.168286    4698 scope.go:117] "RemoveContainer" containerID="38a4f4557f15038ae0cbbb56213451ae8090d2ea86d106f52c18f456b9994bee"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.171660    4698 generic.go:334] "Generic (PLEG): container finished" podID="08930fbb-e669-42c6-a2b1-e36b32415a75" containerID="2853aab95d3937494677732723193752f97b47f333cd41792efe78d0254656d3" exitCode=0
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.171744    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" event={"ID":"08930fbb-e669-42c6-a2b1-e36b32415a75","Type":"ContainerDied","Data":"2853aab95d3937494677732723193752f97b47f333cd41792efe78d0254656d3"}
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.172332    4698 scope.go:117] "RemoveContainer" containerID="2853aab95d3937494677732723193752f97b47f333cd41792efe78d0254656d3"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.175224    4698 generic.go:334] "Generic (PLEG): container finished" podID="ffc063fe-f10a-45b0-9ad8-789923ddbbef" containerID="bb75c8accbe3bd9e99a333ae4a102dfb78729fe58cf2b322b3f6c7e429a59e57" exitCode=0
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.175344    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-p7772" event={"ID":"ffc063fe-f10a-45b0-9ad8-789923ddbbef","Type":"ContainerDied","Data":"bb75c8accbe3bd9e99a333ae4a102dfb78729fe58cf2b322b3f6c7e429a59e57"}
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.175447    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-p7772" event={"ID":"ffc063fe-f10a-45b0-9ad8-789923ddbbef","Type":"ContainerDied","Data":"491308413a01a4e43292715ff3add13a8cc8d9b9f785fe162a6db17ad357b01b"}
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.175533    4698 scope.go:117] "RemoveContainer" containerID="bb75c8accbe3bd9e99a333ae4a102dfb78729fe58cf2b322b3f6c7e429a59e57"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.175875    4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-p7772"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.213993    4698 scope.go:117] "RemoveContainer" containerID="bb75c8accbe3bd9e99a333ae4a102dfb78729fe58cf2b322b3f6c7e429a59e57"
Feb 16 00:31:48 crc kubenswrapper[4698]: E0216 00:31:48.218747    4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb75c8accbe3bd9e99a333ae4a102dfb78729fe58cf2b322b3f6c7e429a59e57\": container with ID starting with bb75c8accbe3bd9e99a333ae4a102dfb78729fe58cf2b322b3f6c7e429a59e57 not found: ID does not exist" containerID="bb75c8accbe3bd9e99a333ae4a102dfb78729fe58cf2b322b3f6c7e429a59e57"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.218825    4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb75c8accbe3bd9e99a333ae4a102dfb78729fe58cf2b322b3f6c7e429a59e57"} err="failed to get container status \"bb75c8accbe3bd9e99a333ae4a102dfb78729fe58cf2b322b3f6c7e429a59e57\": rpc error: code = NotFound desc = could not find container \"bb75c8accbe3bd9e99a333ae4a102dfb78729fe58cf2b322b3f6c7e429a59e57\": container with ID starting with bb75c8accbe3bd9e99a333ae4a102dfb78729fe58cf2b322b3f6c7e429a59e57 not found: ID does not exist"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.359673    4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-p7772"]
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.382012    4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-z7c5h"]
Feb 16 00:31:48 crc kubenswrapper[4698]: E0216 00:31:48.382403    4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc063fe-f10a-45b0-9ad8-789923ddbbef" containerName="default-interconnect"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.382423    4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc063fe-f10a-45b0-9ad8-789923ddbbef" containerName="default-interconnect"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.382564    4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc063fe-f10a-45b0-9ad8-789923ddbbef" containerName="default-interconnect"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.384334    4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.387122    4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.392710    4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-lnqst"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.392947    4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.393015    4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.395348    4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.396267    4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.396364    4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.399144    4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-p7772"]
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.421220    4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-z7c5h"]
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.533870    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.533944    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.533976    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.534011    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/047df03c-6047-4376-a3c5-4d5c734da56a-sasl-config\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.534053    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb7xv\" (UniqueName: \"kubernetes.io/projected/047df03c-6047-4376-a3c5-4d5c734da56a-kube-api-access-pb7xv\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.534086    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.534120    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-sasl-users\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.636011    4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.636345    4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.636505    4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/047df03c-6047-4376-a3c5-4d5c734da56a-sasl-config\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.636732    4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb7xv\" (UniqueName: \"kubernetes.io/projected/047df03c-6047-4376-a3c5-4d5c734da56a-kube-api-access-pb7xv\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.636887    4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.637068    4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-sasl-users\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.637259    4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.637795    4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/047df03c-6047-4376-a3c5-4d5c734da56a-sasl-config\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.643691    4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.645100    4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.648888    4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.648909    4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-sasl-users\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.659019    4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb7xv\" (UniqueName: \"kubernetes.io/projected/047df03c-6047-4376-a3c5-4d5c734da56a-kube-api-access-pb7xv\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.661367    4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/047df03c-6047-4376-a3c5-4d5c734da56a-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-z7c5h\" (UID: \"047df03c-6047-4376-a3c5-4d5c734da56a\") " pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:48 crc kubenswrapper[4698]: I0216 00:31:48.715165    4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-z7c5h"
Feb 16 00:31:49 crc kubenswrapper[4698]: I0216 00:31:49.183626    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" event={"ID":"ee87103e-a39d-4f01-9843-26056d8805a6","Type":"ContainerStarted","Data":"843c6030fbb1ca955b7a483e3259bdf9d01663fb641baa75c834910b3d1416d7"}
Feb 16 00:31:49 crc kubenswrapper[4698]: I0216 00:31:49.185456    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf" event={"ID":"8de7b9dc-c355-4830-a8c4-397a66fea53b","Type":"ContainerStarted","Data":"0cd4d80e8daf52d1f03d6bad559869fb13a65ba05e041b177d393b79e06d2ef2"}
Feb 16 00:31:49 crc kubenswrapper[4698]: I0216 00:31:49.187652    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" event={"ID":"006f7f53-12d8-4372-9a72-d7ed8e42a53f","Type":"ContainerStarted","Data":"75e14f25a7cfcfbbaa2978044094e4f671cc25bbf58eaa3d110ed9177462b50a"}
Feb 16 00:31:49 crc kubenswrapper[4698]: I0216 00:31:49.193952    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" event={"ID":"08930fbb-e669-42c6-a2b1-e36b32415a75","Type":"ContainerStarted","Data":"6ed1d35c53d776264ac46fd7869958894376cfbf5ccd5a1d521055ffd3b91689"}
Feb 16 00:31:49 crc kubenswrapper[4698]: I0216 00:31:49.196175    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" event={"ID":"b4421358-74ea-4070-b840-e5fb3fa20124","Type":"ContainerStarted","Data":"1a6068a9c3cc750836150278a2032949f23e9d6f9135d5f6987289465b275567"}
Feb 16 00:31:49 crc kubenswrapper[4698]: I0216 00:31:49.241078    4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc063fe-f10a-45b0-9ad8-789923ddbbef" path="/var/lib/kubelet/pods/ffc063fe-f10a-45b0-9ad8-789923ddbbef/volumes"
Feb 16 00:31:49 crc kubenswrapper[4698]: I0216 00:31:49.329302    4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-z7c5h"]
Feb 16 00:31:49 crc kubenswrapper[4698]: W0216 00:31:49.330030    4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod047df03c_6047_4376_a3c5_4d5c734da56a.slice/crio-c34d72420ecda7fe3471a7c1edd8091dab5ebb60e0c2a2337e007daaa745f1e2 WatchSource:0}: Error finding container c34d72420ecda7fe3471a7c1edd8091dab5ebb60e0c2a2337e007daaa745f1e2: Status 404 returned error can't find the container with id c34d72420ecda7fe3471a7c1edd8091dab5ebb60e0c2a2337e007daaa745f1e2
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.211294    4698 generic.go:334] "Generic (PLEG): container finished" podID="ee87103e-a39d-4f01-9843-26056d8805a6" containerID="843c6030fbb1ca955b7a483e3259bdf9d01663fb641baa75c834910b3d1416d7" exitCode=0
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.211359    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" event={"ID":"ee87103e-a39d-4f01-9843-26056d8805a6","Type":"ContainerDied","Data":"843c6030fbb1ca955b7a483e3259bdf9d01663fb641baa75c834910b3d1416d7"}
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.211633    4698 scope.go:117] "RemoveContainer" containerID="3d2db9deb33a2f753514506e320001dfd62aff36e2319fd726e5d426929924b5"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.212127    4698 scope.go:117] "RemoveContainer" containerID="843c6030fbb1ca955b7a483e3259bdf9d01663fb641baa75c834910b3d1416d7"
Feb 16 00:31:50 crc kubenswrapper[4698]: E0216 00:31:50.212360    4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj_service-telemetry(ee87103e-a39d-4f01-9843-26056d8805a6)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" podUID="ee87103e-a39d-4f01-9843-26056d8805a6"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.214332    4698 generic.go:334] "Generic (PLEG): container finished" podID="8de7b9dc-c355-4830-a8c4-397a66fea53b" containerID="0cd4d80e8daf52d1f03d6bad559869fb13a65ba05e041b177d393b79e06d2ef2" exitCode=0
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.214379    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf" event={"ID":"8de7b9dc-c355-4830-a8c4-397a66fea53b","Type":"ContainerDied","Data":"0cd4d80e8daf52d1f03d6bad559869fb13a65ba05e041b177d393b79e06d2ef2"}
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.214742    4698 scope.go:117] "RemoveContainer" containerID="0cd4d80e8daf52d1f03d6bad559869fb13a65ba05e041b177d393b79e06d2ef2"
Feb 16 00:31:50 crc kubenswrapper[4698]: E0216 00:31:50.214919    4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf_service-telemetry(8de7b9dc-c355-4830-a8c4-397a66fea53b)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf" podUID="8de7b9dc-c355-4830-a8c4-397a66fea53b"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.216423    4698 generic.go:334] "Generic (PLEG): container finished" podID="006f7f53-12d8-4372-9a72-d7ed8e42a53f" containerID="75e14f25a7cfcfbbaa2978044094e4f671cc25bbf58eaa3d110ed9177462b50a" exitCode=0
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.216459    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" event={"ID":"006f7f53-12d8-4372-9a72-d7ed8e42a53f","Type":"ContainerDied","Data":"75e14f25a7cfcfbbaa2978044094e4f671cc25bbf58eaa3d110ed9177462b50a"}
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.216719    4698 scope.go:117] "RemoveContainer" containerID="75e14f25a7cfcfbbaa2978044094e4f671cc25bbf58eaa3d110ed9177462b50a"
Feb 16 00:31:50 crc kubenswrapper[4698]: E0216 00:31:50.216867    4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq_service-telemetry(006f7f53-12d8-4372-9a72-d7ed8e42a53f)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" podUID="006f7f53-12d8-4372-9a72-d7ed8e42a53f"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.222798    4698 generic.go:334] "Generic (PLEG): container finished" podID="08930fbb-e669-42c6-a2b1-e36b32415a75" containerID="6ed1d35c53d776264ac46fd7869958894376cfbf5ccd5a1d521055ffd3b91689" exitCode=0
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.222848    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" event={"ID":"08930fbb-e669-42c6-a2b1-e36b32415a75","Type":"ContainerDied","Data":"6ed1d35c53d776264ac46fd7869958894376cfbf5ccd5a1d521055ffd3b91689"}
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.223156    4698 scope.go:117] "RemoveContainer" containerID="6ed1d35c53d776264ac46fd7869958894376cfbf5ccd5a1d521055ffd3b91689"
Feb 16 00:31:50 crc kubenswrapper[4698]: E0216 00:31:50.223305    4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-779sn_service-telemetry(08930fbb-e669-42c6-a2b1-e36b32415a75)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" podUID="08930fbb-e669-42c6-a2b1-e36b32415a75"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.228081    4698 generic.go:334] "Generic (PLEG): container finished" podID="b4421358-74ea-4070-b840-e5fb3fa20124" containerID="1a6068a9c3cc750836150278a2032949f23e9d6f9135d5f6987289465b275567" exitCode=0
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.228172    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" event={"ID":"b4421358-74ea-4070-b840-e5fb3fa20124","Type":"ContainerDied","Data":"1a6068a9c3cc750836150278a2032949f23e9d6f9135d5f6987289465b275567"}
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.228788    4698 scope.go:117] "RemoveContainer" containerID="1a6068a9c3cc750836150278a2032949f23e9d6f9135d5f6987289465b275567"
Feb 16 00:31:50 crc kubenswrapper[4698]: E0216 00:31:50.229023    4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-57487576b-276hm_service-telemetry(b4421358-74ea-4070-b840-e5fb3fa20124)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" podUID="b4421358-74ea-4070-b840-e5fb3fa20124"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.230917    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-z7c5h" event={"ID":"047df03c-6047-4376-a3c5-4d5c734da56a","Type":"ContainerStarted","Data":"581a7683a71fd8f7aa92f2b610911ee02a6253cd6f79dfc433ba976dcd0fa907"}
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.230954    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-z7c5h" event={"ID":"047df03c-6047-4376-a3c5-4d5c734da56a","Type":"ContainerStarted","Data":"c34d72420ecda7fe3471a7c1edd8091dab5ebb60e0c2a2337e007daaa745f1e2"}
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.252111    4698 scope.go:117] "RemoveContainer" containerID="8e7864cccd11a21bda1c672034a8e5a93ebacbeb7469668e2f0e760fd5dff3e8"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.293984    4698 scope.go:117] "RemoveContainer" containerID="ce1f15a2957a15f89d57cc4d75cf58b4b88bad6a5560b4696c996e0cd39e3640"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.341015    4698 scope.go:117] "RemoveContainer" containerID="2853aab95d3937494677732723193752f97b47f333cd41792efe78d0254656d3"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.377902    4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-z7c5h" podStartSLOduration=3.377884451 podStartE2EDuration="3.377884451s" podCreationTimestamp="2026-02-16 00:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:31:50.348837538 +0000 UTC m=+1520.006736310" watchObservedRunningTime="2026-02-16 00:31:50.377884451 +0000 UTC m=+1520.035783213"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.380796    4698 scope.go:117] "RemoveContainer" containerID="38a4f4557f15038ae0cbbb56213451ae8090d2ea86d106f52c18f456b9994bee"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.411666    4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"]
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.412513    4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.423395    4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.423685    4698 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.427066    4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"]
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.483491    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/9b9c31ea-9618-447d-9085-c0eb0d81a77e-qdr-test-config\") pod \"qdr-test\" (UID: \"9b9c31ea-9618-447d-9085-c0eb0d81a77e\") " pod="service-telemetry/qdr-test"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.483539    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npgcp\" (UniqueName: \"kubernetes.io/projected/9b9c31ea-9618-447d-9085-c0eb0d81a77e-kube-api-access-npgcp\") pod \"qdr-test\" (UID: \"9b9c31ea-9618-447d-9085-c0eb0d81a77e\") " pod="service-telemetry/qdr-test"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.483770    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/9b9c31ea-9618-447d-9085-c0eb0d81a77e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"9b9c31ea-9618-447d-9085-c0eb0d81a77e\") " pod="service-telemetry/qdr-test"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.584899    4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/9b9c31ea-9618-447d-9085-c0eb0d81a77e-qdr-test-config\") pod \"qdr-test\" (UID: \"9b9c31ea-9618-447d-9085-c0eb0d81a77e\") " pod="service-telemetry/qdr-test"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.584943    4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npgcp\" (UniqueName: \"kubernetes.io/projected/9b9c31ea-9618-447d-9085-c0eb0d81a77e-kube-api-access-npgcp\") pod \"qdr-test\" (UID: \"9b9c31ea-9618-447d-9085-c0eb0d81a77e\") " pod="service-telemetry/qdr-test"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.585023    4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/9b9c31ea-9618-447d-9085-c0eb0d81a77e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"9b9c31ea-9618-447d-9085-c0eb0d81a77e\") " pod="service-telemetry/qdr-test"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.586322    4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/9b9c31ea-9618-447d-9085-c0eb0d81a77e-qdr-test-config\") pod \"qdr-test\" (UID: \"9b9c31ea-9618-447d-9085-c0eb0d81a77e\") " pod="service-telemetry/qdr-test"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.590214    4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/9b9c31ea-9618-447d-9085-c0eb0d81a77e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"9b9c31ea-9618-447d-9085-c0eb0d81a77e\") " pod="service-telemetry/qdr-test"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.602605    4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npgcp\" (UniqueName: \"kubernetes.io/projected/9b9c31ea-9618-447d-9085-c0eb0d81a77e-kube-api-access-npgcp\") pod \"qdr-test\" (UID: \"9b9c31ea-9618-447d-9085-c0eb0d81a77e\") " pod="service-telemetry/qdr-test"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.739902    4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test"
Feb 16 00:31:50 crc kubenswrapper[4698]: I0216 00:31:50.952745    4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"]
Feb 16 00:31:50 crc kubenswrapper[4698]: W0216 00:31:50.957583    4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b9c31ea_9618_447d_9085_c0eb0d81a77e.slice/crio-56d3939f2ac042e645f617e8f36b89d795ffc44db079208e4955719c44347aa7 WatchSource:0}: Error finding container 56d3939f2ac042e645f617e8f36b89d795ffc44db079208e4955719c44347aa7: Status 404 returned error can't find the container with id 56d3939f2ac042e645f617e8f36b89d795ffc44db079208e4955719c44347aa7
Feb 16 00:31:51 crc kubenswrapper[4698]: I0216 00:31:51.245227    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"9b9c31ea-9618-447d-9085-c0eb0d81a77e","Type":"ContainerStarted","Data":"56d3939f2ac042e645f617e8f36b89d795ffc44db079208e4955719c44347aa7"}
Feb 16 00:31:54 crc kubenswrapper[4698]: I0216 00:31:54.231918    4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"
Feb 16 00:31:54 crc kubenswrapper[4698]: E0216 00:31:54.232832    4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.318163    4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"9b9c31ea-9618-447d-9085-c0eb0d81a77e","Type":"ContainerStarted","Data":"5acc3babfd45260e9b08d3796dac1488f8ba31e5607ca8457ac26e88ebc8a72f"}
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.336782    4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.8345643169999999 podStartE2EDuration="10.336757652s" podCreationTimestamp="2026-02-16 00:31:50 +0000 UTC" firstStartedPulling="2026-02-16 00:31:50.959970193 +0000 UTC m=+1520.617868945" lastFinishedPulling="2026-02-16 00:31:59.462163518 +0000 UTC m=+1529.120062280" observedRunningTime="2026-02-16 00:32:00.333101908 +0000 UTC m=+1529.991000670" watchObservedRunningTime="2026-02-16 00:32:00.336757652 +0000 UTC m=+1529.994656424"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.664299    4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-9mg4g"]
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.665585    4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-9mg4g"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.667438    4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.667696    4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.667661    4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.667661    4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.668072    4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.669116    4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.676846    4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-9mg4g"]
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.730522    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.730590    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-ceilometer-publisher\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.730634    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-healthcheck-log\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.730746    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-sensubility-config\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.730802    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-collectd-config\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.730860    4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g"
Feb 16 00:32:00 crc kubenswrapper[4698]: I0216
00:32:00.730895 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c24zx\" (UniqueName: \"kubernetes.io/projected/bfb61d19-7afa-44cc-acd1-4a1de2985850-kube-api-access-c24zx\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.832052 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c24zx\" (UniqueName: \"kubernetes.io/projected/bfb61d19-7afa-44cc-acd1-4a1de2985850-kube-api-access-c24zx\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.832170 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.832196 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-ceilometer-publisher\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.832219 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-healthcheck-log\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " 
pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.832237 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-sensubility-config\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.832261 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-collectd-config\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.832287 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.833711 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.833734 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: 
\"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.833755 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-collectd-config\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.833885 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-healthcheck-log\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.833993 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-ceilometer-publisher\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.834002 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-sensubility-config\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.862847 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c24zx\" (UniqueName: \"kubernetes.io/projected/bfb61d19-7afa-44cc-acd1-4a1de2985850-kube-api-access-c24zx\") pod \"stf-smoketest-smoke1-9mg4g\" (UID: 
\"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:00 crc kubenswrapper[4698]: I0216 00:32:00.987520 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:01 crc kubenswrapper[4698]: I0216 00:32:01.130655 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 16 00:32:01 crc kubenswrapper[4698]: I0216 00:32:01.131756 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 16 00:32:01 crc kubenswrapper[4698]: I0216 00:32:01.147689 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 16 00:32:01 crc kubenswrapper[4698]: I0216 00:32:01.221314 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-9mg4g"] Feb 16 00:32:01 crc kubenswrapper[4698]: I0216 00:32:01.241662 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs4g8\" (UniqueName: \"kubernetes.io/projected/babfc81d-4839-4409-906c-d155bcc4b569-kube-api-access-bs4g8\") pod \"curl\" (UID: \"babfc81d-4839-4409-906c-d155bcc4b569\") " pod="service-telemetry/curl" Feb 16 00:32:01 crc kubenswrapper[4698]: I0216 00:32:01.244270 4698 scope.go:117] "RemoveContainer" containerID="1a6068a9c3cc750836150278a2032949f23e9d6f9135d5f6987289465b275567" Feb 16 00:32:01 crc kubenswrapper[4698]: I0216 00:32:01.341574 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-9mg4g" event={"ID":"bfb61d19-7afa-44cc-acd1-4a1de2985850","Type":"ContainerStarted","Data":"a839bef7ab4c328c314a198c46a8de8b77bf7e85dfae1d673ee399ad0a841a93"} Feb 16 00:32:01 crc kubenswrapper[4698]: I0216 00:32:01.342996 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs4g8\" (UniqueName: 
\"kubernetes.io/projected/babfc81d-4839-4409-906c-d155bcc4b569-kube-api-access-bs4g8\") pod \"curl\" (UID: \"babfc81d-4839-4409-906c-d155bcc4b569\") " pod="service-telemetry/curl" Feb 16 00:32:01 crc kubenswrapper[4698]: I0216 00:32:01.376694 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs4g8\" (UniqueName: \"kubernetes.io/projected/babfc81d-4839-4409-906c-d155bcc4b569-kube-api-access-bs4g8\") pod \"curl\" (UID: \"babfc81d-4839-4409-906c-d155bcc4b569\") " pod="service-telemetry/curl" Feb 16 00:32:01 crc kubenswrapper[4698]: I0216 00:32:01.457854 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 16 00:32:01 crc kubenswrapper[4698]: I0216 00:32:01.875456 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 16 00:32:01 crc kubenswrapper[4698]: W0216 00:32:01.880395 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbabfc81d_4839_4409_906c_d155bcc4b569.slice/crio-adaea055f93a1bdf087d09c99b7736aa3ff0531b5c343e0a8a13d1f0847814c3 WatchSource:0}: Error finding container adaea055f93a1bdf087d09c99b7736aa3ff0531b5c343e0a8a13d1f0847814c3: Status 404 returned error can't find the container with id adaea055f93a1bdf087d09c99b7736aa3ff0531b5c343e0a8a13d1f0847814c3 Feb 16 00:32:02 crc kubenswrapper[4698]: I0216 00:32:02.349690 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"babfc81d-4839-4409-906c-d155bcc4b569","Type":"ContainerStarted","Data":"adaea055f93a1bdf087d09c99b7736aa3ff0531b5c343e0a8a13d1f0847814c3"} Feb 16 00:32:02 crc kubenswrapper[4698]: I0216 00:32:02.352202 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57487576b-276hm" 
event={"ID":"b4421358-74ea-4070-b840-e5fb3fa20124","Type":"ContainerStarted","Data":"94a9ec4467c3070f8746fce3a882f8e6df8ea7bb1be42078d6e6f609092c9226"} Feb 16 00:32:04 crc kubenswrapper[4698]: I0216 00:32:04.231976 4698 scope.go:117] "RemoveContainer" containerID="6ed1d35c53d776264ac46fd7869958894376cfbf5ccd5a1d521055ffd3b91689" Feb 16 00:32:04 crc kubenswrapper[4698]: I0216 00:32:04.365152 4698 generic.go:334] "Generic (PLEG): container finished" podID="babfc81d-4839-4409-906c-d155bcc4b569" containerID="2e58cd847ee3aa6b8d956f8c5a62ef030f0fb4e25c6dab40d7a1c07a7da47390" exitCode=0 Feb 16 00:32:04 crc kubenswrapper[4698]: I0216 00:32:04.365207 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"babfc81d-4839-4409-906c-d155bcc4b569","Type":"ContainerDied","Data":"2e58cd847ee3aa6b8d956f8c5a62ef030f0fb4e25c6dab40d7a1c07a7da47390"} Feb 16 00:32:05 crc kubenswrapper[4698]: I0216 00:32:05.231847 4698 scope.go:117] "RemoveContainer" containerID="843c6030fbb1ca955b7a483e3259bdf9d01663fb641baa75c834910b3d1416d7" Feb 16 00:32:05 crc kubenswrapper[4698]: I0216 00:32:05.232353 4698 scope.go:117] "RemoveContainer" containerID="75e14f25a7cfcfbbaa2978044094e4f671cc25bbf58eaa3d110ed9177462b50a" Feb 16 00:32:05 crc kubenswrapper[4698]: I0216 00:32:05.232669 4698 scope.go:117] "RemoveContainer" containerID="0cd4d80e8daf52d1f03d6bad559869fb13a65ba05e041b177d393b79e06d2ef2" Feb 16 00:32:08 crc kubenswrapper[4698]: I0216 00:32:08.232401 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf" Feb 16 00:32:08 crc kubenswrapper[4698]: E0216 00:32:08.233136 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" Feb 16 00:32:10 crc kubenswrapper[4698]: I0216 00:32:10.172606 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 16 00:32:10 crc kubenswrapper[4698]: I0216 00:32:10.310605 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_babfc81d-4839-4409-906c-d155bcc4b569/curl/0.log" Feb 16 00:32:10 crc kubenswrapper[4698]: I0216 00:32:10.372778 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs4g8\" (UniqueName: \"kubernetes.io/projected/babfc81d-4839-4409-906c-d155bcc4b569-kube-api-access-bs4g8\") pod \"babfc81d-4839-4409-906c-d155bcc4b569\" (UID: \"babfc81d-4839-4409-906c-d155bcc4b569\") " Feb 16 00:32:10 crc kubenswrapper[4698]: I0216 00:32:10.379554 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/babfc81d-4839-4409-906c-d155bcc4b569-kube-api-access-bs4g8" (OuterVolumeSpecName: "kube-api-access-bs4g8") pod "babfc81d-4839-4409-906c-d155bcc4b569" (UID: "babfc81d-4839-4409-906c-d155bcc4b569"). InnerVolumeSpecName "kube-api-access-bs4g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:32:10 crc kubenswrapper[4698]: I0216 00:32:10.419188 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"babfc81d-4839-4409-906c-d155bcc4b569","Type":"ContainerDied","Data":"adaea055f93a1bdf087d09c99b7736aa3ff0531b5c343e0a8a13d1f0847814c3"} Feb 16 00:32:10 crc kubenswrapper[4698]: I0216 00:32:10.419244 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adaea055f93a1bdf087d09c99b7736aa3ff0531b5c343e0a8a13d1f0847814c3" Feb 16 00:32:10 crc kubenswrapper[4698]: I0216 00:32:10.419219 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 16 00:32:10 crc kubenswrapper[4698]: I0216 00:32:10.474875 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs4g8\" (UniqueName: \"kubernetes.io/projected/babfc81d-4839-4409-906c-d155bcc4b569-kube-api-access-bs4g8\") on node \"crc\" DevicePath \"\"" Feb 16 00:32:10 crc kubenswrapper[4698]: I0216 00:32:10.616656 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-gtdjr_3137c83d-63b6-4a67-8ada-535b0b55ff6e/prometheus-webhook-snmp/0.log" Feb 16 00:32:11 crc kubenswrapper[4698]: I0216 00:32:11.741270 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5n7js"] Feb 16 00:32:11 crc kubenswrapper[4698]: E0216 00:32:11.741868 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="babfc81d-4839-4409-906c-d155bcc4b569" containerName="curl" Feb 16 00:32:11 crc kubenswrapper[4698]: I0216 00:32:11.741882 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="babfc81d-4839-4409-906c-d155bcc4b569" containerName="curl" Feb 16 00:32:11 crc kubenswrapper[4698]: I0216 00:32:11.741998 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="babfc81d-4839-4409-906c-d155bcc4b569" containerName="curl" Feb 16 00:32:11 crc kubenswrapper[4698]: I0216 00:32:11.742854 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:11 crc kubenswrapper[4698]: I0216 00:32:11.754359 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5n7js"] Feb 16 00:32:11 crc kubenswrapper[4698]: I0216 00:32:11.894778 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vs9g\" (UniqueName: \"kubernetes.io/projected/e4803994-19de-4065-84a1-c0ac730cfbab-kube-api-access-6vs9g\") pod \"community-operators-5n7js\" (UID: \"e4803994-19de-4065-84a1-c0ac730cfbab\") " pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:11 crc kubenswrapper[4698]: I0216 00:32:11.894839 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4803994-19de-4065-84a1-c0ac730cfbab-catalog-content\") pod \"community-operators-5n7js\" (UID: \"e4803994-19de-4065-84a1-c0ac730cfbab\") " pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:11 crc kubenswrapper[4698]: I0216 00:32:11.895018 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4803994-19de-4065-84a1-c0ac730cfbab-utilities\") pod \"community-operators-5n7js\" (UID: \"e4803994-19de-4065-84a1-c0ac730cfbab\") " pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:11 crc kubenswrapper[4698]: I0216 00:32:11.996513 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4803994-19de-4065-84a1-c0ac730cfbab-catalog-content\") pod \"community-operators-5n7js\" (UID: \"e4803994-19de-4065-84a1-c0ac730cfbab\") " pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:11 crc kubenswrapper[4698]: I0216 00:32:11.996835 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4803994-19de-4065-84a1-c0ac730cfbab-utilities\") pod \"community-operators-5n7js\" (UID: \"e4803994-19de-4065-84a1-c0ac730cfbab\") " pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:11 crc kubenswrapper[4698]: I0216 00:32:11.996974 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vs9g\" (UniqueName: \"kubernetes.io/projected/e4803994-19de-4065-84a1-c0ac730cfbab-kube-api-access-6vs9g\") pod \"community-operators-5n7js\" (UID: \"e4803994-19de-4065-84a1-c0ac730cfbab\") " pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:11 crc kubenswrapper[4698]: I0216 00:32:11.997141 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4803994-19de-4065-84a1-c0ac730cfbab-catalog-content\") pod \"community-operators-5n7js\" (UID: \"e4803994-19de-4065-84a1-c0ac730cfbab\") " pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:11 crc kubenswrapper[4698]: I0216 00:32:11.997711 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4803994-19de-4065-84a1-c0ac730cfbab-utilities\") pod \"community-operators-5n7js\" (UID: \"e4803994-19de-4065-84a1-c0ac730cfbab\") " pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:12 crc kubenswrapper[4698]: I0216 00:32:12.020662 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vs9g\" (UniqueName: \"kubernetes.io/projected/e4803994-19de-4065-84a1-c0ac730cfbab-kube-api-access-6vs9g\") pod \"community-operators-5n7js\" (UID: \"e4803994-19de-4065-84a1-c0ac730cfbab\") " pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:12 crc kubenswrapper[4698]: I0216 00:32:12.067044 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:12 crc kubenswrapper[4698]: I0216 00:32:12.344305 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5n7js"] Feb 16 00:32:12 crc kubenswrapper[4698]: I0216 00:32:12.439481 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-779sn" event={"ID":"08930fbb-e669-42c6-a2b1-e36b32415a75","Type":"ContainerStarted","Data":"985522c5f46d5f7c855c1b85f52b7139a07921340dccbfe2d9484ad73944252e"} Feb 16 00:32:12 crc kubenswrapper[4698]: I0216 00:32:12.444917 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj" event={"ID":"ee87103e-a39d-4f01-9843-26056d8805a6","Type":"ContainerStarted","Data":"618c8f211a98ea00ec25bb446c932f4e90f6b9b7bacd0d86b133cc467868154d"} Feb 16 00:32:12 crc kubenswrapper[4698]: I0216 00:32:12.446268 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n7js" event={"ID":"e4803994-19de-4065-84a1-c0ac730cfbab","Type":"ContainerStarted","Data":"f66259268114bd88093b0a9ab7af5cb385bdc79acc069427539abd0fe3efd01f"} Feb 16 00:32:12 crc kubenswrapper[4698]: I0216 00:32:12.448087 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-9mg4g" event={"ID":"bfb61d19-7afa-44cc-acd1-4a1de2985850","Type":"ContainerStarted","Data":"7610afb0914efd3cc726b5c523cb15f237092d6a1a77fd17c1ef535775305a63"} Feb 16 00:32:12 crc kubenswrapper[4698]: I0216 00:32:12.451027 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf" event={"ID":"8de7b9dc-c355-4830-a8c4-397a66fea53b","Type":"ContainerStarted","Data":"203ded903c426d941bdf3bb32de17e6472fcfb52d891b70b702146f44fc0dc24"} Feb 16 00:32:12 crc kubenswrapper[4698]: I0216 
00:32:12.456131 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq" event={"ID":"006f7f53-12d8-4372-9a72-d7ed8e42a53f","Type":"ContainerStarted","Data":"987d76fc55f503d7a437740c0f4509cfa2812bcb1447874e6e8bf35411a501ed"} Feb 16 00:32:13 crc kubenswrapper[4698]: I0216 00:32:13.465851 4698 generic.go:334] "Generic (PLEG): container finished" podID="e4803994-19de-4065-84a1-c0ac730cfbab" containerID="d47f1e08295df1bb4f9282d9f2d2a30c949fa8ceb88e2e862ad8b9ddfd04b51f" exitCode=0 Feb 16 00:32:13 crc kubenswrapper[4698]: I0216 00:32:13.465901 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n7js" event={"ID":"e4803994-19de-4065-84a1-c0ac730cfbab","Type":"ContainerDied","Data":"d47f1e08295df1bb4f9282d9f2d2a30c949fa8ceb88e2e862ad8b9ddfd04b51f"} Feb 16 00:32:22 crc kubenswrapper[4698]: I0216 00:32:22.232630 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf" Feb 16 00:32:22 crc kubenswrapper[4698]: E0216 00:32:22.233459 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" Feb 16 00:32:23 crc kubenswrapper[4698]: I0216 00:32:23.543278 4698 generic.go:334] "Generic (PLEG): container finished" podID="e4803994-19de-4065-84a1-c0ac730cfbab" containerID="eeed675ca14be2e63187329b908ff7f7a7f32cd99bbef791ca47f9188f480456" exitCode=0 Feb 16 00:32:23 crc kubenswrapper[4698]: I0216 00:32:23.543468 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n7js" 
event={"ID":"e4803994-19de-4065-84a1-c0ac730cfbab","Type":"ContainerDied","Data":"eeed675ca14be2e63187329b908ff7f7a7f32cd99bbef791ca47f9188f480456"} Feb 16 00:32:23 crc kubenswrapper[4698]: I0216 00:32:23.547014 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-9mg4g" event={"ID":"bfb61d19-7afa-44cc-acd1-4a1de2985850","Type":"ContainerStarted","Data":"6882dbe48682c71ae60659d30ddbb48366f7e03cdcbba2054b3cedaf5d1d0aca"} Feb 16 00:32:23 crc kubenswrapper[4698]: I0216 00:32:23.603385 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-9mg4g" podStartSLOduration=2.036192201 podStartE2EDuration="23.603362877s" podCreationTimestamp="2026-02-16 00:32:00 +0000 UTC" firstStartedPulling="2026-02-16 00:32:01.232672319 +0000 UTC m=+1530.890571081" lastFinishedPulling="2026-02-16 00:32:22.799842965 +0000 UTC m=+1552.457741757" observedRunningTime="2026-02-16 00:32:23.60024299 +0000 UTC m=+1553.258141752" watchObservedRunningTime="2026-02-16 00:32:23.603362877 +0000 UTC m=+1553.261261639" Feb 16 00:32:24 crc kubenswrapper[4698]: I0216 00:32:24.559130 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n7js" event={"ID":"e4803994-19de-4065-84a1-c0ac730cfbab","Type":"ContainerStarted","Data":"07a462874111bebc6d662d5a339fd260562e3631fcfbdac59ffaa05fe7a98c4a"} Feb 16 00:32:24 crc kubenswrapper[4698]: I0216 00:32:24.586483 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5n7js" podStartSLOduration=3.117757174 podStartE2EDuration="13.586469328s" podCreationTimestamp="2026-02-16 00:32:11 +0000 UTC" firstStartedPulling="2026-02-16 00:32:13.467290083 +0000 UTC m=+1543.125188845" lastFinishedPulling="2026-02-16 00:32:23.936002207 +0000 UTC m=+1553.593900999" observedRunningTime="2026-02-16 00:32:24.582868865 +0000 UTC m=+1554.240767627" 
watchObservedRunningTime="2026-02-16 00:32:24.586469328 +0000 UTC m=+1554.244368090" Feb 16 00:32:32 crc kubenswrapper[4698]: I0216 00:32:32.067667 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:32 crc kubenswrapper[4698]: I0216 00:32:32.068802 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:32 crc kubenswrapper[4698]: I0216 00:32:32.125076 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:32 crc kubenswrapper[4698]: I0216 00:32:32.660446 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:32 crc kubenswrapper[4698]: I0216 00:32:32.713994 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5n7js"] Feb 16 00:32:34 crc kubenswrapper[4698]: I0216 00:32:34.634997 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5n7js" podUID="e4803994-19de-4065-84a1-c0ac730cfbab" containerName="registry-server" containerID="cri-o://07a462874111bebc6d662d5a339fd260562e3631fcfbdac59ffaa05fe7a98c4a" gracePeriod=2 Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.050748 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.154219 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4803994-19de-4065-84a1-c0ac730cfbab-utilities\") pod \"e4803994-19de-4065-84a1-c0ac730cfbab\" (UID: \"e4803994-19de-4065-84a1-c0ac730cfbab\") " Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.154270 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vs9g\" (UniqueName: \"kubernetes.io/projected/e4803994-19de-4065-84a1-c0ac730cfbab-kube-api-access-6vs9g\") pod \"e4803994-19de-4065-84a1-c0ac730cfbab\" (UID: \"e4803994-19de-4065-84a1-c0ac730cfbab\") " Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.154366 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4803994-19de-4065-84a1-c0ac730cfbab-catalog-content\") pod \"e4803994-19de-4065-84a1-c0ac730cfbab\" (UID: \"e4803994-19de-4065-84a1-c0ac730cfbab\") " Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.155843 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4803994-19de-4065-84a1-c0ac730cfbab-utilities" (OuterVolumeSpecName: "utilities") pod "e4803994-19de-4065-84a1-c0ac730cfbab" (UID: "e4803994-19de-4065-84a1-c0ac730cfbab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.160710 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4803994-19de-4065-84a1-c0ac730cfbab-kube-api-access-6vs9g" (OuterVolumeSpecName: "kube-api-access-6vs9g") pod "e4803994-19de-4065-84a1-c0ac730cfbab" (UID: "e4803994-19de-4065-84a1-c0ac730cfbab"). InnerVolumeSpecName "kube-api-access-6vs9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.213802 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4803994-19de-4065-84a1-c0ac730cfbab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4803994-19de-4065-84a1-c0ac730cfbab" (UID: "e4803994-19de-4065-84a1-c0ac730cfbab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.256070 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4803994-19de-4065-84a1-c0ac730cfbab-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.256126 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vs9g\" (UniqueName: \"kubernetes.io/projected/e4803994-19de-4065-84a1-c0ac730cfbab-kube-api-access-6vs9g\") on node \"crc\" DevicePath \"\"" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.256146 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4803994-19de-4065-84a1-c0ac730cfbab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.646686 4698 generic.go:334] "Generic (PLEG): container finished" podID="e4803994-19de-4065-84a1-c0ac730cfbab" containerID="07a462874111bebc6d662d5a339fd260562e3631fcfbdac59ffaa05fe7a98c4a" exitCode=0 Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.646732 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n7js" event={"ID":"e4803994-19de-4065-84a1-c0ac730cfbab","Type":"ContainerDied","Data":"07a462874111bebc6d662d5a339fd260562e3631fcfbdac59ffaa05fe7a98c4a"} Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.646807 4698 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-5n7js" event={"ID":"e4803994-19de-4065-84a1-c0ac730cfbab","Type":"ContainerDied","Data":"f66259268114bd88093b0a9ab7af5cb385bdc79acc069427539abd0fe3efd01f"} Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.646829 4698 scope.go:117] "RemoveContainer" containerID="07a462874111bebc6d662d5a339fd260562e3631fcfbdac59ffaa05fe7a98c4a" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.649418 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5n7js" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.667857 4698 scope.go:117] "RemoveContainer" containerID="eeed675ca14be2e63187329b908ff7f7a7f32cd99bbef791ca47f9188f480456" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.679504 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5n7js"] Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.692109 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5n7js"] Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.695681 4698 scope.go:117] "RemoveContainer" containerID="d47f1e08295df1bb4f9282d9f2d2a30c949fa8ceb88e2e862ad8b9ddfd04b51f" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.719018 4698 scope.go:117] "RemoveContainer" containerID="07a462874111bebc6d662d5a339fd260562e3631fcfbdac59ffaa05fe7a98c4a" Feb 16 00:32:35 crc kubenswrapper[4698]: E0216 00:32:35.719652 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07a462874111bebc6d662d5a339fd260562e3631fcfbdac59ffaa05fe7a98c4a\": container with ID starting with 07a462874111bebc6d662d5a339fd260562e3631fcfbdac59ffaa05fe7a98c4a not found: ID does not exist" containerID="07a462874111bebc6d662d5a339fd260562e3631fcfbdac59ffaa05fe7a98c4a" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 
00:32:35.719679 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a462874111bebc6d662d5a339fd260562e3631fcfbdac59ffaa05fe7a98c4a"} err="failed to get container status \"07a462874111bebc6d662d5a339fd260562e3631fcfbdac59ffaa05fe7a98c4a\": rpc error: code = NotFound desc = could not find container \"07a462874111bebc6d662d5a339fd260562e3631fcfbdac59ffaa05fe7a98c4a\": container with ID starting with 07a462874111bebc6d662d5a339fd260562e3631fcfbdac59ffaa05fe7a98c4a not found: ID does not exist" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.719712 4698 scope.go:117] "RemoveContainer" containerID="eeed675ca14be2e63187329b908ff7f7a7f32cd99bbef791ca47f9188f480456" Feb 16 00:32:35 crc kubenswrapper[4698]: E0216 00:32:35.720153 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeed675ca14be2e63187329b908ff7f7a7f32cd99bbef791ca47f9188f480456\": container with ID starting with eeed675ca14be2e63187329b908ff7f7a7f32cd99bbef791ca47f9188f480456 not found: ID does not exist" containerID="eeed675ca14be2e63187329b908ff7f7a7f32cd99bbef791ca47f9188f480456" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.720224 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeed675ca14be2e63187329b908ff7f7a7f32cd99bbef791ca47f9188f480456"} err="failed to get container status \"eeed675ca14be2e63187329b908ff7f7a7f32cd99bbef791ca47f9188f480456\": rpc error: code = NotFound desc = could not find container \"eeed675ca14be2e63187329b908ff7f7a7f32cd99bbef791ca47f9188f480456\": container with ID starting with eeed675ca14be2e63187329b908ff7f7a7f32cd99bbef791ca47f9188f480456 not found: ID does not exist" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.720272 4698 scope.go:117] "RemoveContainer" containerID="d47f1e08295df1bb4f9282d9f2d2a30c949fa8ceb88e2e862ad8b9ddfd04b51f" Feb 16 00:32:35 crc 
kubenswrapper[4698]: E0216 00:32:35.720644 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d47f1e08295df1bb4f9282d9f2d2a30c949fa8ceb88e2e862ad8b9ddfd04b51f\": container with ID starting with d47f1e08295df1bb4f9282d9f2d2a30c949fa8ceb88e2e862ad8b9ddfd04b51f not found: ID does not exist" containerID="d47f1e08295df1bb4f9282d9f2d2a30c949fa8ceb88e2e862ad8b9ddfd04b51f" Feb 16 00:32:35 crc kubenswrapper[4698]: I0216 00:32:35.720667 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d47f1e08295df1bb4f9282d9f2d2a30c949fa8ceb88e2e862ad8b9ddfd04b51f"} err="failed to get container status \"d47f1e08295df1bb4f9282d9f2d2a30c949fa8ceb88e2e862ad8b9ddfd04b51f\": rpc error: code = NotFound desc = could not find container \"d47f1e08295df1bb4f9282d9f2d2a30c949fa8ceb88e2e862ad8b9ddfd04b51f\": container with ID starting with d47f1e08295df1bb4f9282d9f2d2a30c949fa8ceb88e2e862ad8b9ddfd04b51f not found: ID does not exist" Feb 16 00:32:37 crc kubenswrapper[4698]: I0216 00:32:37.232125 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf" Feb 16 00:32:37 crc kubenswrapper[4698]: E0216 00:32:37.232352 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" Feb 16 00:32:37 crc kubenswrapper[4698]: I0216 00:32:37.242797 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4803994-19de-4065-84a1-c0ac730cfbab" path="/var/lib/kubelet/pods/e4803994-19de-4065-84a1-c0ac730cfbab/volumes" Feb 16 00:32:40 crc 
kubenswrapper[4698]: I0216 00:32:40.786344 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-gtdjr_3137c83d-63b6-4a67-8ada-535b0b55ff6e/prometheus-webhook-snmp/0.log" Feb 16 00:32:45 crc kubenswrapper[4698]: I0216 00:32:45.729814 4698 generic.go:334] "Generic (PLEG): container finished" podID="bfb61d19-7afa-44cc-acd1-4a1de2985850" containerID="7610afb0914efd3cc726b5c523cb15f237092d6a1a77fd17c1ef535775305a63" exitCode=0 Feb 16 00:32:45 crc kubenswrapper[4698]: I0216 00:32:45.729875 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-9mg4g" event={"ID":"bfb61d19-7afa-44cc-acd1-4a1de2985850","Type":"ContainerDied","Data":"7610afb0914efd3cc726b5c523cb15f237092d6a1a77fd17c1ef535775305a63"} Feb 16 00:32:45 crc kubenswrapper[4698]: I0216 00:32:45.730769 4698 scope.go:117] "RemoveContainer" containerID="7610afb0914efd3cc726b5c523cb15f237092d6a1a77fd17c1ef535775305a63" Feb 16 00:32:50 crc kubenswrapper[4698]: I0216 00:32:50.231154 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf" Feb 16 00:32:50 crc kubenswrapper[4698]: E0216 00:32:50.231775 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" Feb 16 00:32:54 crc kubenswrapper[4698]: I0216 00:32:54.822163 4698 generic.go:334] "Generic (PLEG): container finished" podID="bfb61d19-7afa-44cc-acd1-4a1de2985850" containerID="6882dbe48682c71ae60659d30ddbb48366f7e03cdcbba2054b3cedaf5d1d0aca" exitCode=0 Feb 16 00:32:54 crc kubenswrapper[4698]: I0216 00:32:54.822216 4698 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-9mg4g" event={"ID":"bfb61d19-7afa-44cc-acd1-4a1de2985850","Type":"ContainerDied","Data":"6882dbe48682c71ae60659d30ddbb48366f7e03cdcbba2054b3cedaf5d1d0aca"} Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.103889 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.125667 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-collectd-entrypoint-script\") pod \"bfb61d19-7afa-44cc-acd1-4a1de2985850\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.125722 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-ceilometer-entrypoint-script\") pod \"bfb61d19-7afa-44cc-acd1-4a1de2985850\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.158587 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "bfb61d19-7afa-44cc-acd1-4a1de2985850" (UID: "bfb61d19-7afa-44cc-acd1-4a1de2985850"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.164650 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "bfb61d19-7afa-44cc-acd1-4a1de2985850" (UID: "bfb61d19-7afa-44cc-acd1-4a1de2985850"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.226411 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-healthcheck-log\") pod \"bfb61d19-7afa-44cc-acd1-4a1de2985850\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.226462 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-collectd-config\") pod \"bfb61d19-7afa-44cc-acd1-4a1de2985850\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.226503 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-ceilometer-publisher\") pod \"bfb61d19-7afa-44cc-acd1-4a1de2985850\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.226551 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c24zx\" (UniqueName: \"kubernetes.io/projected/bfb61d19-7afa-44cc-acd1-4a1de2985850-kube-api-access-c24zx\") pod \"bfb61d19-7afa-44cc-acd1-4a1de2985850\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " Feb 16 
00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.226588 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-sensubility-config\") pod \"bfb61d19-7afa-44cc-acd1-4a1de2985850\" (UID: \"bfb61d19-7afa-44cc-acd1-4a1de2985850\") " Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.226855 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.226866 4698 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.232250 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb61d19-7afa-44cc-acd1-4a1de2985850-kube-api-access-c24zx" (OuterVolumeSpecName: "kube-api-access-c24zx") pod "bfb61d19-7afa-44cc-acd1-4a1de2985850" (UID: "bfb61d19-7afa-44cc-acd1-4a1de2985850"). InnerVolumeSpecName "kube-api-access-c24zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.252727 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "bfb61d19-7afa-44cc-acd1-4a1de2985850" (UID: "bfb61d19-7afa-44cc-acd1-4a1de2985850"). InnerVolumeSpecName "healthcheck-log". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.252869 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "bfb61d19-7afa-44cc-acd1-4a1de2985850" (UID: "bfb61d19-7afa-44cc-acd1-4a1de2985850"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.253010 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "bfb61d19-7afa-44cc-acd1-4a1de2985850" (UID: "bfb61d19-7afa-44cc-acd1-4a1de2985850"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.258925 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "bfb61d19-7afa-44cc-acd1-4a1de2985850" (UID: "bfb61d19-7afa-44cc-acd1-4a1de2985850"). InnerVolumeSpecName "collectd-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.328274 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c24zx\" (UniqueName: \"kubernetes.io/projected/bfb61d19-7afa-44cc-acd1-4a1de2985850-kube-api-access-c24zx\") on node \"crc\" DevicePath \"\"" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.328321 4698 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-sensubility-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.328337 4698 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-healthcheck-log\") on node \"crc\" DevicePath \"\"" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.328349 4698 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-collectd-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.328365 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/bfb61d19-7afa-44cc-acd1-4a1de2985850-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.841163 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-9mg4g" event={"ID":"bfb61d19-7afa-44cc-acd1-4a1de2985850","Type":"ContainerDied","Data":"a839bef7ab4c328c314a198c46a8de8b77bf7e85dfae1d673ee399ad0a841a93"} Feb 16 00:32:56 crc kubenswrapper[4698]: I0216 00:32:56.841236 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a839bef7ab4c328c314a198c46a8de8b77bf7e85dfae1d673ee399ad0a841a93" Feb 16 00:32:56 crc 
kubenswrapper[4698]: I0216 00:32:56.841365 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-9mg4g" Feb 16 00:32:58 crc kubenswrapper[4698]: I0216 00:32:58.153180 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-9mg4g_bfb61d19-7afa-44cc-acd1-4a1de2985850/smoketest-collectd/0.log" Feb 16 00:32:58 crc kubenswrapper[4698]: I0216 00:32:58.400526 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-9mg4g_bfb61d19-7afa-44cc-acd1-4a1de2985850/smoketest-ceilometer/0.log" Feb 16 00:32:58 crc kubenswrapper[4698]: I0216 00:32:58.680333 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-z7c5h_047df03c-6047-4376-a3c5-4d5c734da56a/default-interconnect/0.log" Feb 16 00:32:58 crc kubenswrapper[4698]: I0216 00:32:58.958600 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf_8de7b9dc-c355-4830-a8c4-397a66fea53b/bridge/2.log" Feb 16 00:32:59 crc kubenswrapper[4698]: I0216 00:32:59.234360 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-v7xsf_8de7b9dc-c355-4830-a8c4-397a66fea53b/sg-core/0.log" Feb 16 00:32:59 crc kubenswrapper[4698]: I0216 00:32:59.540168 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq_006f7f53-12d8-4372-9a72-d7ed8e42a53f/bridge/2.log" Feb 16 00:32:59 crc kubenswrapper[4698]: I0216 00:32:59.891918 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-59f7dd55c5-9n4lq_006f7f53-12d8-4372-9a72-d7ed8e42a53f/sg-core/0.log" Feb 16 00:33:00 crc kubenswrapper[4698]: I0216 00:33:00.147214 4698 log.go:25] "Finished 
parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj_ee87103e-a39d-4f01-9843-26056d8805a6/bridge/2.log" Feb 16 00:33:00 crc kubenswrapper[4698]: I0216 00:33:00.400181 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-rv5rj_ee87103e-a39d-4f01-9843-26056d8805a6/sg-core/0.log" Feb 16 00:33:00 crc kubenswrapper[4698]: I0216 00:33:00.731066 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-57487576b-276hm_b4421358-74ea-4070-b840-e5fb3fa20124/bridge/2.log" Feb 16 00:33:01 crc kubenswrapper[4698]: I0216 00:33:01.044909 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-57487576b-276hm_b4421358-74ea-4070-b840-e5fb3fa20124/sg-core/0.log" Feb 16 00:33:01 crc kubenswrapper[4698]: I0216 00:33:01.344878 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-779sn_08930fbb-e669-42c6-a2b1-e36b32415a75/bridge/2.log" Feb 16 00:33:01 crc kubenswrapper[4698]: I0216 00:33:01.595940 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-779sn_08930fbb-e669-42c6-a2b1-e36b32415a75/sg-core/0.log" Feb 16 00:33:05 crc kubenswrapper[4698]: I0216 00:33:05.044721 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-6f55f6c4c5-56jtc_b896217b-c297-4228-8940-d2e0a2f7547f/operator/0.log" Feb 16 00:33:05 crc kubenswrapper[4698]: I0216 00:33:05.231687 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf" Feb 16 00:33:05 crc kubenswrapper[4698]: E0216 00:33:05.232078 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" Feb 16 00:33:05 crc kubenswrapper[4698]: I0216 00:33:05.372557 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_60f89283-c44b-475b-a87a-2471155ac745/prometheus/0.log" Feb 16 00:33:05 crc kubenswrapper[4698]: I0216 00:33:05.929777 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_d76004b3-8be8-40f4-be5e-e9a792bebce1/elasticsearch/0.log" Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.027393 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t2bd5"] Feb 16 00:33:06 crc kubenswrapper[4698]: E0216 00:33:06.028443 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4803994-19de-4065-84a1-c0ac730cfbab" containerName="extract-content" Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.028472 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4803994-19de-4065-84a1-c0ac730cfbab" containerName="extract-content" Feb 16 00:33:06 crc kubenswrapper[4698]: E0216 00:33:06.028491 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb61d19-7afa-44cc-acd1-4a1de2985850" containerName="smoketest-collectd" Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.028501 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb61d19-7afa-44cc-acd1-4a1de2985850" containerName="smoketest-collectd" Feb 16 00:33:06 crc kubenswrapper[4698]: E0216 00:33:06.028522 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb61d19-7afa-44cc-acd1-4a1de2985850" containerName="smoketest-ceilometer" Feb 16 00:33:06 crc 
kubenswrapper[4698]: I0216 00:33:06.028532 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb61d19-7afa-44cc-acd1-4a1de2985850" containerName="smoketest-ceilometer" Feb 16 00:33:06 crc kubenswrapper[4698]: E0216 00:33:06.028552 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4803994-19de-4065-84a1-c0ac730cfbab" containerName="extract-utilities" Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.028561 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4803994-19de-4065-84a1-c0ac730cfbab" containerName="extract-utilities" Feb 16 00:33:06 crc kubenswrapper[4698]: E0216 00:33:06.028574 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4803994-19de-4065-84a1-c0ac730cfbab" containerName="registry-server" Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.028584 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4803994-19de-4065-84a1-c0ac730cfbab" containerName="registry-server" Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.028825 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb61d19-7afa-44cc-acd1-4a1de2985850" containerName="smoketest-collectd" Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.028851 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb61d19-7afa-44cc-acd1-4a1de2985850" containerName="smoketest-ceilometer" Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.028868 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4803994-19de-4065-84a1-c0ac730cfbab" containerName="registry-server" Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.030092 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t2bd5" Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.035843 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2bd5"] Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.169569 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frcdv\" (UniqueName: \"kubernetes.io/projected/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-kube-api-access-frcdv\") pod \"certified-operators-t2bd5\" (UID: \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\") " pod="openshift-marketplace/certified-operators-t2bd5" Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.169658 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-catalog-content\") pod \"certified-operators-t2bd5\" (UID: \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\") " pod="openshift-marketplace/certified-operators-t2bd5" Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.169693 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-utilities\") pod \"certified-operators-t2bd5\" (UID: \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\") " pod="openshift-marketplace/certified-operators-t2bd5" Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.214353 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-gtdjr_3137c83d-63b6-4a67-8ada-535b0b55ff6e/prometheus-webhook-snmp/0.log" Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.270626 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frcdv\" (UniqueName: 
\"kubernetes.io/projected/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-kube-api-access-frcdv\") pod \"certified-operators-t2bd5\" (UID: \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\") " pod="openshift-marketplace/certified-operators-t2bd5"
Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.270705 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-catalog-content\") pod \"certified-operators-t2bd5\" (UID: \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\") " pod="openshift-marketplace/certified-operators-t2bd5"
Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.270738 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-utilities\") pod \"certified-operators-t2bd5\" (UID: \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\") " pod="openshift-marketplace/certified-operators-t2bd5"
Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.271253 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-utilities\") pod \"certified-operators-t2bd5\" (UID: \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\") " pod="openshift-marketplace/certified-operators-t2bd5"
Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.271580 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-catalog-content\") pod \"certified-operators-t2bd5\" (UID: \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\") " pod="openshift-marketplace/certified-operators-t2bd5"
Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.292373 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frcdv\" (UniqueName: \"kubernetes.io/projected/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-kube-api-access-frcdv\") pod \"certified-operators-t2bd5\" (UID: \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\") " pod="openshift-marketplace/certified-operators-t2bd5"
Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.347385 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2bd5"
Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.552332 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_1e7998f1-4de5-475f-9c43-a9beba750f02/alertmanager/0.log"
Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.813743 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2bd5"]
Feb 16 00:33:06 crc kubenswrapper[4698]: I0216 00:33:06.939496 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2bd5" event={"ID":"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1","Type":"ContainerStarted","Data":"036b06ec8b7df722a7dfad67b37a34fb2348aa82ad28c46323875514f7cf8aed"}
Feb 16 00:33:07 crc kubenswrapper[4698]: I0216 00:33:07.951193 4698 generic.go:334] "Generic (PLEG): container finished" podID="93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" containerID="0dacef1cdb92c9ad0f239d5a507de94ef589a3ff7a48acfc6cdd6ed244f6639f" exitCode=0
Feb 16 00:33:07 crc kubenswrapper[4698]: I0216 00:33:07.951275 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2bd5" event={"ID":"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1","Type":"ContainerDied","Data":"0dacef1cdb92c9ad0f239d5a507de94ef589a3ff7a48acfc6cdd6ed244f6639f"}
Feb 16 00:33:09 crc kubenswrapper[4698]: I0216 00:33:09.971727 4698 generic.go:334] "Generic (PLEG): container finished" podID="93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" containerID="9d991030e86d0af72cfcbf90f1abdcfdf9329f7908145d63f205c37601a37269" exitCode=0
Feb 16 00:33:09 crc kubenswrapper[4698]: I0216 00:33:09.971818 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2bd5" event={"ID":"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1","Type":"ContainerDied","Data":"9d991030e86d0af72cfcbf90f1abdcfdf9329f7908145d63f205c37601a37269"}
Feb 16 00:33:10 crc kubenswrapper[4698]: I0216 00:33:10.982869 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2bd5" event={"ID":"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1","Type":"ContainerStarted","Data":"27fdb6610c495ece8f474794c21e04db836b67d23aefc43c2fec5da2e3cd47fc"}
Feb 16 00:33:11 crc kubenswrapper[4698]: I0216 00:33:11.008652 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t2bd5" podStartSLOduration=3.374515789 podStartE2EDuration="6.008601601s" podCreationTimestamp="2026-02-16 00:33:05 +0000 UTC" firstStartedPulling="2026-02-16 00:33:07.953074846 +0000 UTC m=+1597.610973638" lastFinishedPulling="2026-02-16 00:33:10.587160688 +0000 UTC m=+1600.245059450" observedRunningTime="2026-02-16 00:33:11.002949055 +0000 UTC m=+1600.660847857" watchObservedRunningTime="2026-02-16 00:33:11.008601601 +0000 UTC m=+1600.666500403"
Feb 16 00:33:16 crc kubenswrapper[4698]: I0216 00:33:16.347933 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t2bd5"
Feb 16 00:33:16 crc kubenswrapper[4698]: I0216 00:33:16.348294 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t2bd5"
Feb 16 00:33:16 crc kubenswrapper[4698]: I0216 00:33:16.389512 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t2bd5"
Feb 16 00:33:17 crc kubenswrapper[4698]: I0216 00:33:17.083262 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t2bd5"
Feb 16 00:33:17 crc kubenswrapper[4698]: I0216 00:33:17.134538 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t2bd5"]
Feb 16 00:33:18 crc kubenswrapper[4698]: I0216 00:33:18.232088 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"
Feb 16 00:33:18 crc kubenswrapper[4698]: E0216 00:33:18.232847 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c"
Feb 16 00:33:19 crc kubenswrapper[4698]: I0216 00:33:19.042664 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t2bd5" podUID="93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" containerName="registry-server" containerID="cri-o://27fdb6610c495ece8f474794c21e04db836b67d23aefc43c2fec5da2e3cd47fc" gracePeriod=2
Feb 16 00:33:19 crc kubenswrapper[4698]: I0216 00:33:19.528232 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2bd5"
Feb 16 00:33:19 crc kubenswrapper[4698]: I0216 00:33:19.689496 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-catalog-content\") pod \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\" (UID: \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\") "
Feb 16 00:33:19 crc kubenswrapper[4698]: I0216 00:33:19.689574 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frcdv\" (UniqueName: \"kubernetes.io/projected/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-kube-api-access-frcdv\") pod \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\" (UID: \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\") "
Feb 16 00:33:19 crc kubenswrapper[4698]: I0216 00:33:19.689788 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-utilities\") pod \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\" (UID: \"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1\") "
Feb 16 00:33:19 crc kubenswrapper[4698]: I0216 00:33:19.691324 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-utilities" (OuterVolumeSpecName: "utilities") pod "93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" (UID: "93bcbf37-570a-4f40-bbd0-a7a080d0b0a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:33:19 crc kubenswrapper[4698]: I0216 00:33:19.701906 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-kube-api-access-frcdv" (OuterVolumeSpecName: "kube-api-access-frcdv") pod "93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" (UID: "93bcbf37-570a-4f40-bbd0-a7a080d0b0a1"). InnerVolumeSpecName "kube-api-access-frcdv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 00:33:19 crc kubenswrapper[4698]: I0216 00:33:19.792146 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frcdv\" (UniqueName: \"kubernetes.io/projected/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-kube-api-access-frcdv\") on node \"crc\" DevicePath \"\""
Feb 16 00:33:19 crc kubenswrapper[4698]: I0216 00:33:19.792201 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.006894 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" (UID: "93bcbf37-570a-4f40-bbd0-a7a080d0b0a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.054674 4698 generic.go:334] "Generic (PLEG): container finished" podID="93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" containerID="27fdb6610c495ece8f474794c21e04db836b67d23aefc43c2fec5da2e3cd47fc" exitCode=0
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.054722 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2bd5" event={"ID":"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1","Type":"ContainerDied","Data":"27fdb6610c495ece8f474794c21e04db836b67d23aefc43c2fec5da2e3cd47fc"}
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.054803 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2bd5"
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.054830 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2bd5" event={"ID":"93bcbf37-570a-4f40-bbd0-a7a080d0b0a1","Type":"ContainerDied","Data":"036b06ec8b7df722a7dfad67b37a34fb2348aa82ad28c46323875514f7cf8aed"}
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.054863 4698 scope.go:117] "RemoveContainer" containerID="27fdb6610c495ece8f474794c21e04db836b67d23aefc43c2fec5da2e3cd47fc"
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.082438 4698 scope.go:117] "RemoveContainer" containerID="9d991030e86d0af72cfcbf90f1abdcfdf9329f7908145d63f205c37601a37269"
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.098528 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.111996 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t2bd5"]
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.118465 4698 scope.go:117] "RemoveContainer" containerID="0dacef1cdb92c9ad0f239d5a507de94ef589a3ff7a48acfc6cdd6ed244f6639f"
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.124203 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t2bd5"]
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.146219 4698 scope.go:117] "RemoveContainer" containerID="27fdb6610c495ece8f474794c21e04db836b67d23aefc43c2fec5da2e3cd47fc"
Feb 16 00:33:20 crc kubenswrapper[4698]: E0216 00:33:20.147096 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27fdb6610c495ece8f474794c21e04db836b67d23aefc43c2fec5da2e3cd47fc\": container with ID starting with 27fdb6610c495ece8f474794c21e04db836b67d23aefc43c2fec5da2e3cd47fc not found: ID does not exist" containerID="27fdb6610c495ece8f474794c21e04db836b67d23aefc43c2fec5da2e3cd47fc"
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.147174 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27fdb6610c495ece8f474794c21e04db836b67d23aefc43c2fec5da2e3cd47fc"} err="failed to get container status \"27fdb6610c495ece8f474794c21e04db836b67d23aefc43c2fec5da2e3cd47fc\": rpc error: code = NotFound desc = could not find container \"27fdb6610c495ece8f474794c21e04db836b67d23aefc43c2fec5da2e3cd47fc\": container with ID starting with 27fdb6610c495ece8f474794c21e04db836b67d23aefc43c2fec5da2e3cd47fc not found: ID does not exist"
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.147219 4698 scope.go:117] "RemoveContainer" containerID="9d991030e86d0af72cfcbf90f1abdcfdf9329f7908145d63f205c37601a37269"
Feb 16 00:33:20 crc kubenswrapper[4698]: E0216 00:33:20.147588 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d991030e86d0af72cfcbf90f1abdcfdf9329f7908145d63f205c37601a37269\": container with ID starting with 9d991030e86d0af72cfcbf90f1abdcfdf9329f7908145d63f205c37601a37269 not found: ID does not exist" containerID="9d991030e86d0af72cfcbf90f1abdcfdf9329f7908145d63f205c37601a37269"
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.147700 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d991030e86d0af72cfcbf90f1abdcfdf9329f7908145d63f205c37601a37269"} err="failed to get container status \"9d991030e86d0af72cfcbf90f1abdcfdf9329f7908145d63f205c37601a37269\": rpc error: code = NotFound desc = could not find container \"9d991030e86d0af72cfcbf90f1abdcfdf9329f7908145d63f205c37601a37269\": container with ID starting with 9d991030e86d0af72cfcbf90f1abdcfdf9329f7908145d63f205c37601a37269 not found: ID does not exist"
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.147747 4698 scope.go:117] "RemoveContainer" containerID="0dacef1cdb92c9ad0f239d5a507de94ef589a3ff7a48acfc6cdd6ed244f6639f"
Feb 16 00:33:20 crc kubenswrapper[4698]: E0216 00:33:20.148418 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dacef1cdb92c9ad0f239d5a507de94ef589a3ff7a48acfc6cdd6ed244f6639f\": container with ID starting with 0dacef1cdb92c9ad0f239d5a507de94ef589a3ff7a48acfc6cdd6ed244f6639f not found: ID does not exist" containerID="0dacef1cdb92c9ad0f239d5a507de94ef589a3ff7a48acfc6cdd6ed244f6639f"
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.148461 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dacef1cdb92c9ad0f239d5a507de94ef589a3ff7a48acfc6cdd6ed244f6639f"} err="failed to get container status \"0dacef1cdb92c9ad0f239d5a507de94ef589a3ff7a48acfc6cdd6ed244f6639f\": rpc error: code = NotFound desc = could not find container \"0dacef1cdb92c9ad0f239d5a507de94ef589a3ff7a48acfc6cdd6ed244f6639f\": container with ID starting with 0dacef1cdb92c9ad0f239d5a507de94ef589a3ff7a48acfc6cdd6ed244f6639f not found: ID does not exist"
Feb 16 00:33:20 crc kubenswrapper[4698]: I0216 00:33:20.677314 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-59bf6579cc-jbsp9_e158a3a2-4367-4cfb-8d31-085f96d9dc6a/operator/0.log"
Feb 16 00:33:21 crc kubenswrapper[4698]: I0216 00:33:21.241391 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" path="/var/lib/kubelet/pods/93bcbf37-570a-4f40-bbd0-a7a080d0b0a1/volumes"
Feb 16 00:33:24 crc kubenswrapper[4698]: I0216 00:33:24.142669 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-6f55f6c4c5-56jtc_b896217b-c297-4228-8940-d2e0a2f7547f/operator/0.log"
Feb 16 00:33:24 crc kubenswrapper[4698]: I0216 00:33:24.429754 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_9b9c31ea-9618-447d-9085-c0eb0d81a77e/qdr/0.log"
Feb 16 00:33:29 crc kubenswrapper[4698]: I0216 00:33:29.232453 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"
Feb 16 00:33:29 crc kubenswrapper[4698]: E0216 00:33:29.233434 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c"
Feb 16 00:33:32 crc kubenswrapper[4698]: I0216 00:33:32.133659 4698 scope.go:117] "RemoveContainer" containerID="56be1e33b378063f3e5a144f605dbad8ae243c499d5eb5418645650a45b6dc2d"
Feb 16 00:33:32 crc kubenswrapper[4698]: I0216 00:33:32.178801 4698 scope.go:117] "RemoveContainer" containerID="370ec32c037198e76fc305348018ace6d03f67e90f36202e7dc34ac204a4f151"
Feb 16 00:33:41 crc kubenswrapper[4698]: I0216 00:33:41.236165 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"
Feb 16 00:33:41 crc kubenswrapper[4698]: E0216 00:33:41.236911 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.076981 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zlrxx/must-gather-xqr5q"]
Feb 16 00:33:50 crc kubenswrapper[4698]: E0216 00:33:50.077870 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" containerName="extract-content"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.077888 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" containerName="extract-content"
Feb 16 00:33:50 crc kubenswrapper[4698]: E0216 00:33:50.077905 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" containerName="extract-utilities"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.077914 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" containerName="extract-utilities"
Feb 16 00:33:50 crc kubenswrapper[4698]: E0216 00:33:50.077927 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" containerName="registry-server"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.077935 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" containerName="registry-server"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.078102 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="93bcbf37-570a-4f40-bbd0-a7a080d0b0a1" containerName="registry-server"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.079975 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zlrxx/must-gather-xqr5q"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.086792 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zlrxx"/"kube-root-ca.crt"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.087012 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zlrxx"/"openshift-service-ca.crt"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.102322 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zlrxx/must-gather-xqr5q"]
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.237073 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5-must-gather-output\") pod \"must-gather-xqr5q\" (UID: \"d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5\") " pod="openshift-must-gather-zlrxx/must-gather-xqr5q"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.237343 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8wgf\" (UniqueName: \"kubernetes.io/projected/d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5-kube-api-access-l8wgf\") pod \"must-gather-xqr5q\" (UID: \"d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5\") " pod="openshift-must-gather-zlrxx/must-gather-xqr5q"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.338556 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5-must-gather-output\") pod \"must-gather-xqr5q\" (UID: \"d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5\") " pod="openshift-must-gather-zlrxx/must-gather-xqr5q"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.338718 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8wgf\" (UniqueName: \"kubernetes.io/projected/d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5-kube-api-access-l8wgf\") pod \"must-gather-xqr5q\" (UID: \"d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5\") " pod="openshift-must-gather-zlrxx/must-gather-xqr5q"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.339083 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5-must-gather-output\") pod \"must-gather-xqr5q\" (UID: \"d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5\") " pod="openshift-must-gather-zlrxx/must-gather-xqr5q"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.359291 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8wgf\" (UniqueName: \"kubernetes.io/projected/d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5-kube-api-access-l8wgf\") pod \"must-gather-xqr5q\" (UID: \"d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5\") " pod="openshift-must-gather-zlrxx/must-gather-xqr5q"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.400683 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zlrxx/must-gather-xqr5q"
Feb 16 00:33:50 crc kubenswrapper[4698]: I0216 00:33:50.659061 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zlrxx/must-gather-xqr5q"]
Feb 16 00:33:51 crc kubenswrapper[4698]: I0216 00:33:51.342570 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zlrxx/must-gather-xqr5q" event={"ID":"d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5","Type":"ContainerStarted","Data":"6e5d33f9b13727d894b95e337d750c72d6966f47fbef47dd4dbe5a6f53de3d6f"}
Feb 16 00:33:55 crc kubenswrapper[4698]: I0216 00:33:55.231671 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"
Feb 16 00:33:55 crc kubenswrapper[4698]: E0216 00:33:55.232801 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c"
Feb 16 00:33:59 crc kubenswrapper[4698]: I0216 00:33:59.414108 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zlrxx/must-gather-xqr5q" event={"ID":"d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5","Type":"ContainerStarted","Data":"8c37d19b4b001f737597891ae64a95b117a984c87d1086cd18a8ceb839e55490"}
Feb 16 00:33:59 crc kubenswrapper[4698]: I0216 00:33:59.414559 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zlrxx/must-gather-xqr5q" event={"ID":"d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5","Type":"ContainerStarted","Data":"a96e2837e56e511497e10428abee0d1a4efa4beacb5f8823a30ef45e885f31a5"}
Feb 16 00:33:59 crc kubenswrapper[4698]: I0216 00:33:59.440066 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zlrxx/must-gather-xqr5q" podStartSLOduration=1.621426233 podStartE2EDuration="9.440045468s" podCreationTimestamp="2026-02-16 00:33:50 +0000 UTC" firstStartedPulling="2026-02-16 00:33:50.660649907 +0000 UTC m=+1640.318548669" lastFinishedPulling="2026-02-16 00:33:58.479269102 +0000 UTC m=+1648.137167904" observedRunningTime="2026-02-16 00:33:59.430969205 +0000 UTC m=+1649.088868037" watchObservedRunningTime="2026-02-16 00:33:59.440045468 +0000 UTC m=+1649.097944250"
Feb 16 00:34:10 crc kubenswrapper[4698]: I0216 00:34:10.232334 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"
Feb 16 00:34:10 crc kubenswrapper[4698]: E0216 00:34:10.234864 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c"
Feb 16 00:34:24 crc kubenswrapper[4698]: I0216 00:34:24.231351 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"
Feb 16 00:34:24 crc kubenswrapper[4698]: E0216 00:34:24.232076 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c"
Feb 16 00:34:35 crc kubenswrapper[4698]: I0216 00:34:35.232266 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"
Feb 16 00:34:35 crc kubenswrapper[4698]: E0216 00:34:35.232848 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c"
Feb 16 00:34:45 crc kubenswrapper[4698]: I0216 00:34:45.687585 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dmvbr_b89dcc24-5331-4c05-9a27-5f4415a7faf1/control-plane-machine-set-operator/0.log"
Feb 16 00:34:45 crc kubenswrapper[4698]: I0216 00:34:45.790980 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-v8kwd_f6a60c76-1a30-4b5c-a984-08eef4aedb2b/kube-rbac-proxy/0.log"
Feb 16 00:34:45 crc kubenswrapper[4698]: I0216 00:34:45.828043 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-v8kwd_f6a60c76-1a30-4b5c-a984-08eef4aedb2b/machine-api-operator/0.log"
Feb 16 00:34:46 crc kubenswrapper[4698]: I0216 00:34:46.231682 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"
Feb 16 00:34:46 crc kubenswrapper[4698]: E0216 00:34:46.231973 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c"
Feb 16 00:34:59 crc kubenswrapper[4698]: I0216 00:34:59.054240 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-pjqpd_8f38cd8e-5e59-4142-9cb0-acd83e924991/cert-manager-controller/0.log"
Feb 16 00:34:59 crc kubenswrapper[4698]: I0216 00:34:59.184403 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-h8hzz_78cfa9e0-d6da-44f0-94ea-8067668d7efa/cert-manager-cainjector/0.log"
Feb 16 00:34:59 crc kubenswrapper[4698]: I0216 00:34:59.235897 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-2rgj5_2661205f-64a5-4fd6-b7a2-a243fb57a87a/cert-manager-webhook/0.log"
Feb 16 00:35:01 crc kubenswrapper[4698]: I0216 00:35:01.238394 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"
Feb 16 00:35:01 crc kubenswrapper[4698]: E0216 00:35:01.238901 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c"
Feb 16 00:35:14 crc kubenswrapper[4698]: I0216 00:35:14.232130 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"
Feb 16 00:35:14 crc kubenswrapper[4698]: E0216 00:35:14.232956 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c"
Feb 16 00:35:14 crc kubenswrapper[4698]: I0216 00:35:14.807491 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-qkpw2_8d84ce9c-7712-4137-8b1e-d5c2ce3b413b/prometheus-operator/0.log"
Feb 16 00:35:15 crc kubenswrapper[4698]: I0216 00:35:15.049675 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-676f96946c-qkdb5_6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4/prometheus-operator-admission-webhook/0.log"
Feb 16 00:35:15 crc kubenswrapper[4698]: I0216 00:35:15.073913 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-676f96946c-7rnfm_f8612dc6-5549-459e-8e2d-16851e88463c/prometheus-operator-admission-webhook/0.log"
Feb 16 00:35:15 crc kubenswrapper[4698]: I0216 00:35:15.268160 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-5dhfb_03ed7c21-b695-42b0-a85e-3dec0cb7595c/operator/0.log"
Feb 16 00:35:15 crc kubenswrapper[4698]: I0216 00:35:15.268984 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-kb9qw_5a0b53be-4b28-4554-85bd-ddb9f580423e/perses-operator/0.log"
Feb 16 00:35:26 crc kubenswrapper[4698]: I0216 00:35:26.232066 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf"
Feb 16 00:35:26 crc kubenswrapper[4698]: E0216 00:35:26.233125 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c"
Feb 16 00:35:30 crc kubenswrapper[4698]: I0216 00:35:30.471810 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx_b9a8da53-3db9-4151-9170-1ec4f853c766/util/0.log"
Feb 16 00:35:30 crc kubenswrapper[4698]: I0216 00:35:30.688388 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx_b9a8da53-3db9-4151-9170-1ec4f853c766/util/0.log"
Feb 16 00:35:30 crc kubenswrapper[4698]: I0216 00:35:30.701960 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx_b9a8da53-3db9-4151-9170-1ec4f853c766/pull/0.log"
Feb 16 00:35:30 crc kubenswrapper[4698]: I0216 00:35:30.705562 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx_b9a8da53-3db9-4151-9170-1ec4f853c766/pull/0.log"
Feb 16 00:35:30 crc kubenswrapper[4698]: I0216 00:35:30.864049 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx_b9a8da53-3db9-4151-9170-1ec4f853c766/util/0.log"
Feb 16 00:35:30 crc kubenswrapper[4698]: I0216 00:35:30.864818 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx_b9a8da53-3db9-4151-9170-1ec4f853c766/pull/0.log"
Feb 16 00:35:30 crc kubenswrapper[4698]: I0216 00:35:30.895008 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1gxkzx_b9a8da53-3db9-4151-9170-1ec4f853c766/extract/0.log"
Feb 16 00:35:31 crc kubenswrapper[4698]: I0216 00:35:31.038557 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4_18c8a3ac-1fef-4511-baac-d9ca8e2b7a49/util/0.log"
Feb 16 00:35:31 crc kubenswrapper[4698]: I0216 00:35:31.194581 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4_18c8a3ac-1fef-4511-baac-d9ca8e2b7a49/pull/0.log"
Feb 16 00:35:31 crc kubenswrapper[4698]: I0216 00:35:31.203304 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4_18c8a3ac-1fef-4511-baac-d9ca8e2b7a49/util/0.log"
Feb 16 00:35:31 crc kubenswrapper[4698]: I0216 00:35:31.245099 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4_18c8a3ac-1fef-4511-baac-d9ca8e2b7a49/pull/0.log"
Feb 16 00:35:31 crc kubenswrapper[4698]: I0216 00:35:31.419479 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4_18c8a3ac-1fef-4511-baac-d9ca8e2b7a49/util/0.log"
Feb 16 00:35:31 crc kubenswrapper[4698]: I0216 00:35:31.423968 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4_18c8a3ac-1fef-4511-baac-d9ca8e2b7a49/extract/0.log"
Feb 16 00:35:31 crc kubenswrapper[4698]: I0216 00:35:31.426497 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fzbsz4_18c8a3ac-1fef-4511-baac-d9ca8e2b7a49/pull/0.log"
Feb 16 00:35:31 crc kubenswrapper[4698]: I0216 00:35:31.581318 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk_31482fcb-ffd3-40fe-a5fc-5b21d6b522ce/util/0.log"
Feb 16 00:35:31 crc kubenswrapper[4698]: I0216 00:35:31.747890 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk_31482fcb-ffd3-40fe-a5fc-5b21d6b522ce/util/0.log"
Feb 16 00:35:31 crc kubenswrapper[4698]: I0216 00:35:31.772112 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk_31482fcb-ffd3-40fe-a5fc-5b21d6b522ce/pull/0.log"
Feb 16 00:35:31 crc kubenswrapper[4698]: I0216 00:35:31.776277 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk_31482fcb-ffd3-40fe-a5fc-5b21d6b522ce/pull/0.log"
Feb 16 00:35:31 crc kubenswrapper[4698]: I0216 00:35:31.924679 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk_31482fcb-ffd3-40fe-a5fc-5b21d6b522ce/extract/0.log"
Feb 16 00:35:31 crc kubenswrapper[4698]: I0216 00:35:31.929987 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk_31482fcb-ffd3-40fe-a5fc-5b21d6b522ce/pull/0.log"
Feb 16 00:35:31 crc kubenswrapper[4698]: I0216 00:35:31.943398 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xjqxk_31482fcb-ffd3-40fe-a5fc-5b21d6b522ce/util/0.log"
Feb 16 00:35:32 crc kubenswrapper[4698]: I0216 00:35:32.101767 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2_0910281b-5250-4f30-bd3b-966d88ce449a/util/0.log"
Feb 16 00:35:32 crc kubenswrapper[4698]: I0216 00:35:32.280578 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2_0910281b-5250-4f30-bd3b-966d88ce449a/pull/0.log"
Feb 16 00:35:32 crc kubenswrapper[4698]: I0216 00:35:32.280717 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2_0910281b-5250-4f30-bd3b-966d88ce449a/pull/0.log"
Feb 16 00:35:32 crc kubenswrapper[4698]: I0216 00:35:32.283363 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2_0910281b-5250-4f30-bd3b-966d88ce449a/util/0.log"
Feb 16 00:35:32 crc kubenswrapper[4698]: I0216 00:35:32.476703 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2_0910281b-5250-4f30-bd3b-966d88ce449a/extract/0.log"
Feb 16 00:35:32 crc kubenswrapper[4698]: I0216 00:35:32.478281 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2_0910281b-5250-4f30-bd3b-966d88ce449a/util/0.log"
Feb 16 00:35:32 crc kubenswrapper[4698]: I0216 00:35:32.483865 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cbfl2_0910281b-5250-4f30-bd3b-966d88ce449a/pull/0.log"
Feb 16 00:35:32 crc kubenswrapper[4698]: I0216 00:35:32.689089 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bmzl7_e4f60670-6117-4fd1-b177-bd5b801a669f/extract-utilities/0.log"
Feb 16 00:35:32 crc
kubenswrapper[4698]: I0216 00:35:32.823413 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bmzl7_e4f60670-6117-4fd1-b177-bd5b801a669f/extract-utilities/0.log" Feb 16 00:35:32 crc kubenswrapper[4698]: I0216 00:35:32.838136 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bmzl7_e4f60670-6117-4fd1-b177-bd5b801a669f/extract-content/0.log" Feb 16 00:35:32 crc kubenswrapper[4698]: I0216 00:35:32.860239 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bmzl7_e4f60670-6117-4fd1-b177-bd5b801a669f/extract-content/0.log" Feb 16 00:35:32 crc kubenswrapper[4698]: I0216 00:35:32.999746 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bmzl7_e4f60670-6117-4fd1-b177-bd5b801a669f/extract-utilities/0.log" Feb 16 00:35:33 crc kubenswrapper[4698]: I0216 00:35:33.073065 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bmzl7_e4f60670-6117-4fd1-b177-bd5b801a669f/extract-content/0.log" Feb 16 00:35:33 crc kubenswrapper[4698]: I0216 00:35:33.198899 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9n7sm_2d6a6915-3d48-4029-a022-26658bb88374/extract-utilities/0.log" Feb 16 00:35:33 crc kubenswrapper[4698]: I0216 00:35:33.231822 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bmzl7_e4f60670-6117-4fd1-b177-bd5b801a669f/registry-server/0.log" Feb 16 00:35:33 crc kubenswrapper[4698]: I0216 00:35:33.368243 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9n7sm_2d6a6915-3d48-4029-a022-26658bb88374/extract-content/0.log" Feb 16 00:35:33 crc kubenswrapper[4698]: I0216 00:35:33.410609 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-9n7sm_2d6a6915-3d48-4029-a022-26658bb88374/extract-utilities/0.log" Feb 16 00:35:33 crc kubenswrapper[4698]: I0216 00:35:33.410752 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9n7sm_2d6a6915-3d48-4029-a022-26658bb88374/extract-content/0.log" Feb 16 00:35:33 crc kubenswrapper[4698]: I0216 00:35:33.512842 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9n7sm_2d6a6915-3d48-4029-a022-26658bb88374/extract-utilities/0.log" Feb 16 00:35:33 crc kubenswrapper[4698]: I0216 00:35:33.522262 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9n7sm_2d6a6915-3d48-4029-a022-26658bb88374/extract-content/0.log" Feb 16 00:35:33 crc kubenswrapper[4698]: I0216 00:35:33.773065 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-222qh_ce91d9bb-94cd-4bd8-8116-1add3e921236/marketplace-operator/0.log" Feb 16 00:35:33 crc kubenswrapper[4698]: I0216 00:35:33.842114 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9n7sm_2d6a6915-3d48-4029-a022-26658bb88374/registry-server/0.log" Feb 16 00:35:33 crc kubenswrapper[4698]: I0216 00:35:33.884804 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pw888_3b2a749a-0657-4670-8028-451cde6de012/extract-utilities/0.log" Feb 16 00:35:34 crc kubenswrapper[4698]: I0216 00:35:34.002417 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pw888_3b2a749a-0657-4670-8028-451cde6de012/extract-utilities/0.log" Feb 16 00:35:34 crc kubenswrapper[4698]: I0216 00:35:34.002447 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-pw888_3b2a749a-0657-4670-8028-451cde6de012/extract-content/0.log" Feb 16 00:35:34 crc kubenswrapper[4698]: I0216 00:35:34.004930 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pw888_3b2a749a-0657-4670-8028-451cde6de012/extract-content/0.log" Feb 16 00:35:34 crc kubenswrapper[4698]: I0216 00:35:34.153007 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pw888_3b2a749a-0657-4670-8028-451cde6de012/extract-content/0.log" Feb 16 00:35:34 crc kubenswrapper[4698]: I0216 00:35:34.154462 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pw888_3b2a749a-0657-4670-8028-451cde6de012/extract-utilities/0.log" Feb 16 00:35:34 crc kubenswrapper[4698]: I0216 00:35:34.381839 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pw888_3b2a749a-0657-4670-8028-451cde6de012/registry-server/0.log" Feb 16 00:35:37 crc kubenswrapper[4698]: I0216 00:35:37.232345 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf" Feb 16 00:35:37 crc kubenswrapper[4698]: E0216 00:35:37.233141 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" Feb 16 00:35:47 crc kubenswrapper[4698]: I0216 00:35:47.161366 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-676f96946c-qkdb5_6dcc2d08-cb5c-43ba-b568-992bfcbf9ed4/prometheus-operator-admission-webhook/0.log" Feb 16 00:35:47 crc kubenswrapper[4698]: I0216 00:35:47.168060 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-qkpw2_8d84ce9c-7712-4137-8b1e-d5c2ce3b413b/prometheus-operator/0.log" Feb 16 00:35:47 crc kubenswrapper[4698]: I0216 00:35:47.192478 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-676f96946c-7rnfm_f8612dc6-5549-459e-8e2d-16851e88463c/prometheus-operator-admission-webhook/0.log" Feb 16 00:35:47 crc kubenswrapper[4698]: I0216 00:35:47.294857 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-5dhfb_03ed7c21-b695-42b0-a85e-3dec0cb7595c/operator/0.log" Feb 16 00:35:47 crc kubenswrapper[4698]: I0216 00:35:47.358005 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-kb9qw_5a0b53be-4b28-4554-85bd-ddb9f580423e/perses-operator/0.log" Feb 16 00:35:50 crc kubenswrapper[4698]: I0216 00:35:50.232268 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf" Feb 16 00:35:50 crc kubenswrapper[4698]: E0216 00:35:50.233071 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" Feb 16 00:36:05 crc kubenswrapper[4698]: I0216 00:36:05.232131 4698 scope.go:117] "RemoveContainer" 
containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf" Feb 16 00:36:05 crc kubenswrapper[4698]: E0216 00:36:05.233007 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" Feb 16 00:36:20 crc kubenswrapper[4698]: I0216 00:36:20.231935 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf" Feb 16 00:36:20 crc kubenswrapper[4698]: E0216 00:36:20.232731 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z56m2_openshift-machine-config-operator(7b351654-277f-4d0d-84f9-b003f934936c)\"" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" Feb 16 00:36:34 crc kubenswrapper[4698]: I0216 00:36:34.232499 4698 scope.go:117] "RemoveContainer" containerID="6d12be3f716e2ace930650c221e883f4477e2a95a512b61dc0b3c472737710cf" Feb 16 00:36:34 crc kubenswrapper[4698]: I0216 00:36:34.683429 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" event={"ID":"7b351654-277f-4d0d-84f9-b003f934936c","Type":"ContainerStarted","Data":"ad1134f2d26a5ecfb34e0bc7395a23d004d447bce7e677c6fd340fc553da6421"} Feb 16 00:36:37 crc kubenswrapper[4698]: I0216 00:36:37.718224 4698 generic.go:334] "Generic (PLEG): container finished" podID="d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5" 
containerID="a96e2837e56e511497e10428abee0d1a4efa4beacb5f8823a30ef45e885f31a5" exitCode=0 Feb 16 00:36:37 crc kubenswrapper[4698]: I0216 00:36:37.718371 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zlrxx/must-gather-xqr5q" event={"ID":"d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5","Type":"ContainerDied","Data":"a96e2837e56e511497e10428abee0d1a4efa4beacb5f8823a30ef45e885f31a5"} Feb 16 00:36:37 crc kubenswrapper[4698]: I0216 00:36:37.719301 4698 scope.go:117] "RemoveContainer" containerID="a96e2837e56e511497e10428abee0d1a4efa4beacb5f8823a30ef45e885f31a5" Feb 16 00:36:37 crc kubenswrapper[4698]: I0216 00:36:37.826299 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zlrxx_must-gather-xqr5q_d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5/gather/0.log" Feb 16 00:36:44 crc kubenswrapper[4698]: I0216 00:36:44.612788 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zlrxx/must-gather-xqr5q"] Feb 16 00:36:44 crc kubenswrapper[4698]: I0216 00:36:44.613854 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zlrxx/must-gather-xqr5q" podUID="d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5" containerName="copy" containerID="cri-o://8c37d19b4b001f737597891ae64a95b117a984c87d1086cd18a8ceb839e55490" gracePeriod=2 Feb 16 00:36:44 crc kubenswrapper[4698]: I0216 00:36:44.622538 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zlrxx/must-gather-xqr5q"] Feb 16 00:36:44 crc kubenswrapper[4698]: I0216 00:36:44.780025 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zlrxx_must-gather-xqr5q_d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5/copy/0.log" Feb 16 00:36:44 crc kubenswrapper[4698]: I0216 00:36:44.780441 4698 generic.go:334] "Generic (PLEG): container finished" podID="d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5" containerID="8c37d19b4b001f737597891ae64a95b117a984c87d1086cd18a8ceb839e55490" exitCode=143 Feb 16 
00:36:45 crc kubenswrapper[4698]: I0216 00:36:45.053217 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zlrxx_must-gather-xqr5q_d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5/copy/0.log" Feb 16 00:36:45 crc kubenswrapper[4698]: I0216 00:36:45.053933 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zlrxx/must-gather-xqr5q" Feb 16 00:36:45 crc kubenswrapper[4698]: I0216 00:36:45.221326 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5-must-gather-output\") pod \"d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5\" (UID: \"d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5\") " Feb 16 00:36:45 crc kubenswrapper[4698]: I0216 00:36:45.221499 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8wgf\" (UniqueName: \"kubernetes.io/projected/d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5-kube-api-access-l8wgf\") pod \"d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5\" (UID: \"d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5\") " Feb 16 00:36:45 crc kubenswrapper[4698]: I0216 00:36:45.227820 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5-kube-api-access-l8wgf" (OuterVolumeSpecName: "kube-api-access-l8wgf") pod "d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5" (UID: "d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5"). InnerVolumeSpecName "kube-api-access-l8wgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 00:36:45 crc kubenswrapper[4698]: I0216 00:36:45.283196 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5" (UID: "d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5"). 
InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 00:36:45 crc kubenswrapper[4698]: I0216 00:36:45.323110 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8wgf\" (UniqueName: \"kubernetes.io/projected/d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5-kube-api-access-l8wgf\") on node \"crc\" DevicePath \"\"" Feb 16 00:36:45 crc kubenswrapper[4698]: I0216 00:36:45.323163 4698 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 00:36:45 crc kubenswrapper[4698]: I0216 00:36:45.789082 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zlrxx_must-gather-xqr5q_d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5/copy/0.log" Feb 16 00:36:45 crc kubenswrapper[4698]: I0216 00:36:45.790011 4698 scope.go:117] "RemoveContainer" containerID="8c37d19b4b001f737597891ae64a95b117a984c87d1086cd18a8ceb839e55490" Feb 16 00:36:45 crc kubenswrapper[4698]: I0216 00:36:45.790133 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zlrxx/must-gather-xqr5q" Feb 16 00:36:45 crc kubenswrapper[4698]: I0216 00:36:45.813188 4698 scope.go:117] "RemoveContainer" containerID="a96e2837e56e511497e10428abee0d1a4efa4beacb5f8823a30ef45e885f31a5" Feb 16 00:36:47 crc kubenswrapper[4698]: I0216 00:36:47.246033 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5" path="/var/lib/kubelet/pods/d6bb72ca-5c3a-4d3f-845c-7f82bddf6ae5/volumes" Feb 16 00:38:57 crc kubenswrapper[4698]: I0216 00:38:57.045893 4698 patch_prober.go:28] interesting pod/machine-config-daemon-z56m2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:38:57 crc kubenswrapper[4698]: I0216 00:38:57.046598 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z56m2" podUID="7b351654-277f-4d0d-84f9-b003f934936c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515144463463024457 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015144463464017375 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015144457243016516 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015144457243015466 5ustar corecore